Thomas Kipf
You can treat this choice as a hyperparameter and try both versions if you want to be sure -- there will most likely be a slight difference in performance.
Thanks! Right here: https://github.com/tkipf/pygcn/blob/master/pygcn/layers.py#L18
There are many ways to batch operations for multiple graphs. If all graphs are of the same size and not very large (and you don't care about sparsity), then stacking...
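The quoted reply is cut off, but for the same-size, dense case it describes, a minimal sketch of the stacking approach could look like this (the `BatchedGraphConv` layer and all shapes below are illustrative, not code from the pygcn repo):

```python
import torch
import torch.nn as nn

class BatchedGraphConv(nn.Module):
    """One dense GCN layer applied to a batch of equally sized graphs."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, adj, x):
        # adj: [B, N, N] normalized adjacencies, x: [B, N, F] node features
        return torch.bmm(adj, self.linear(x))  # batched A @ X @ W

# example: a batch of 8 graphs, 10 nodes each, 16 input features
adj = torch.rand(8, 10, 10)
x = torch.rand(8, 10, 16)
out = BatchedGraphConv(16, 32)(adj, x)  # -> [8, 10, 32]
```

For graphs of different sizes, the usual alternative is to merge them into one block-diagonal sparse adjacency matrix so a single forward pass covers the whole batch.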
Have a look at the original source of the CORA dataset. The README file should specify how the dictionary/vocabulary was created. https://linqs.soe.ucsc.edu/data
The normalization is carried out in https://github.com/tkipf/pygcn/blob/4396e4db5b97c9e071516bc601c9b6693696c489/pygcn/utils.py#L56 -- function: `normalize`. Note that this implementation uses a simpler row-wise normalization `D^{-1}A` instead of `D^{-1/2}AD^{-1/2}`. Both perform similarly in practice.
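For reference, here is a sketch of the two variants with SciPy sparse matrices (the helper names are mine; self-loop addition is folded into each helper, whereas the repo adds `sp.eye` before calling `normalize`):

```python
import numpy as np
import scipy.sparse as sp

def normalize_row(adj):
    """Row-wise normalization D^{-1}(A + I), as in the pygcn utils."""
    adj = adj + sp.eye(adj.shape[0])          # add self-loops
    deg = np.asarray(adj.sum(1)).flatten()    # node degrees
    d_inv = np.zeros_like(deg)
    d_inv[deg > 0] = 1.0 / deg[deg > 0]       # guard against isolated nodes
    return sp.diags(d_inv) @ adj

def normalize_sym(adj):
    """Symmetric normalization D^{-1/2}(A + I)D^{-1/2}, as in the paper."""
    adj = adj + sp.eye(adj.shape[0])
    deg = np.asarray(adj.sum(1)).flatten()
    d_inv_sqrt = np.zeros_like(deg)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    d_hat = sp.diags(d_inv_sqrt)
    return d_hat @ adj @ d_hat
```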
You can extract and examine any hidden layer activation and check whether it is useful as some form of graph embedding. If you train in a supervised way, then these...
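One way to extract such activations in PyTorch is a forward hook. A minimal sketch, using a hypothetical stand-in model (the layer sizes and the `nn.Sequential` model below are placeholders for demonstration, not the pygcn GCN):

```python
import torch
import torch.nn as nn

# hypothetical stand-in; in practice, use your trained model
model = nn.Sequential(nn.Linear(1433, 16), nn.ReLU(), nn.Linear(16, 7))

activations = {}

def save_output(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# capture the output of the first (hidden) layer
handle = model[0].register_forward_hook(save_output("hidden"))

features = torch.rand(2708, 1433)      # Cora-sized dummy features
model.eval()
with torch.no_grad():
    model(features)                    # one forward pass

embeddings = activations["hidden"]     # [2708, 16] hidden activations
handle.remove()
```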
In this case it’s best to simply take the embeddings just before doing the last linear projection to the softmax logits. In other words, if the last layer is softmax(A*H*W),...
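Concretely, that means returning `A @ H` instead of the logits. A sketch, assuming a model with a `gc1` graph convolution as in the pygcn repo (the helper itself is hypothetical):

```python
import torch
import torch.nn.functional as F

def node_embeddings(model, x, adj):
    # Mirrors pygcn's GCN.forward up to the last layer: since the last
    # layer computes softmax(A @ H @ W), the pre-projection embeddings
    # are A @ H, where H is the hidden activation.
    h = F.relu(model.gc1(x, adj))   # H: first-layer activations
    return torch.spmm(adj, h)       # A @ H
```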
The dot product will not be a good scoring function on embeddings trained solely for classification. You can either use the embeddings from github.com/tkipf/gae which are optimized for dot-product scoring...
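For illustration, a sketch of that dot-product scoring on an embedding matrix `z` (helper names and dimensions are mine; the GAE decoder is the sigmoid of the inner product):

```python
import torch

def dot_product_decoder(z):
    """GAE-style decoder: edge probability = sigmoid(z_i . z_j)."""
    return torch.sigmoid(z @ z.t())   # [N, N] score matrix

def score_links(z, edges):
    # edges: [E, 2] tensor of candidate (src, dst) index pairs
    src, dst = edges[:, 0], edges[:, 1]
    return torch.sigmoid((z[src] * z[dst]).sum(dim=1))

# example: rank candidate links with hypothetical 16-dim embeddings
z = torch.rand(100, 16)
pairs = torch.tensor([[0, 1], [2, 3]])
print(score_links(z, pairs))
```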
Note that this repository uses different dataset splits and a slightly different model architecture than in our original paper. For an exact replication of the experiments in our paper, please...
Yes, the model implementation is slightly different for the PyTorch version. In this version, the adjacency matrix is normalized by left-multiplication with the inverse of the degree matrix (instead of...