Thomas Kipf


Yes, you could simply take a non-symmetric adjacency matrix and normalize with D^(-1)*A instead of D^(-1/2)*A*D^(-1/2). Have a look at this paper for more details: https://arxiv.org/abs/1703.06103
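A minimal numpy sketch of the two normalization schemes mentioned above (not code from this repository; the example graph and variable names are made up for illustration). Random-walk normalization D^(-1)*A is well-defined for a directed, non-symmetric adjacency matrix, while the symmetric variant D^(-1/2)*A*D^(-1/2) assumes a symmetric A:

```python
import numpy as np

# Directed (non-symmetric) example adjacency matrix.
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 1., 0.]])

# Random-walk normalization: D^(-1) A, using out-degrees.
# Each row of the result sums to 1, so it works for directed graphs.
deg = A.sum(axis=1)
A_rw = np.diag(1.0 / deg) @ A

# Symmetric normalization: D^(-1/2) A D^(-1/2).
# Only meaningful for a symmetric A, so we symmetrize first for comparison.
A_und = A + A.T
d = A_und.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
A_sym = D_inv_sqrt @ A_und @ D_inv_sqrt
```

Note that the random-walk propagation matrix is row-stochastic, whereas the symmetric one stays symmetric, which is what makes it convenient for undirected graphs.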

No worries! Your intuition is correct in the absence of _any_ node features, i.e. if you pick an identity matrix for the initial feature matrix. If, however, you describe nodes...

Yes, sure! https://arxiv.org/abs/1509.09292 uses node degree as an initial feature, for example.
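To make the two feature choices above concrete, here is a small illustrative sketch (the graph is made up): an identity matrix as the "featureless" input, versus node degree as a simple structural feature:

```python
import numpy as np

# Undirected example adjacency matrix.
A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
n = A.shape[0]

# Featureless case: one-hot identity features.
# Each node initially "knows" only its own index.
X_identity = np.eye(n)

# Structural alternative: node degree as a single-column feature matrix.
X_degree = A.sum(axis=1, keepdims=True)
```

With identity features the first GCN layer effectively learns a free embedding per node; with degree (or richer) features, the model can generalize across nodes that share structural properties.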

I do not intend to release this implementation as part of this repository. But it shouldn't be too difficult to implement this yourself :-) Edit: PRs are welcome

I don't have the capacity to give advice on individual projects or code sent to me, unfortunately.

The current setup of this code release is optimized for transductive learning. For inductive learning, have a look at this recent paper: https://arxiv.org/abs/1706.02216 Nonetheless, you can do inductive learning with...

What you describe sounds like a transductive learning problem, i.e. you should be able to use our code 'out of the box' with no or only minor modifications. Just make...

Yes -- and for the unlabeled data you can provide 0-vectors as labels (i.e. all 0s). Just make sure these are masked/skipped in the loss function.

1) We simply leave unlabeled data points out of the sum (the total loss is a sum of per-node losses), so they do not contribute to the gradient. 2) For...

You can think of the GCN as a method for creating node embeddings (the representation just before the final classification layer of the model). To get a node...
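A hedged numpy sketch of that view, assuming a standard two-layer GCN (weights and graph here are placeholders, not the trained model): the hidden activation H is the node-embedding matrix, and the last matrix product is the classification layer applied on top of it:

```python
import numpy as np

def normalize(A):
    """Symmetric normalization with self-loops: D^(-1/2)(A+I)D^(-1/2)."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_embeddings(A, X, W0, W1):
    """Two-layer GCN forward pass.

    Returns (H, logits): H is the hidden representation, i.e. the
    node embeddings just before the classification layer.
    """
    A_norm = normalize(A)
    H = np.maximum(A_norm @ X @ W0, 0)  # ReLU hidden layer -> embeddings
    logits = A_norm @ H @ W1            # classification layer
    return H, logits

# Toy usage with random weights on a 3-node path graph.
rng = np.random.default_rng(0)
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.eye(3)                           # featureless: identity input
H, logits = gcn_embeddings(A, X, rng.standard_normal((3, 4)),
                           rng.standard_normal((4, 2)))
```

To use the embeddings for a downstream task, you would simply take the rows of H after training instead of the logits.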