Thomas Kipf
Replacing the softmax cross-entropy loss with a sigmoid cross-entropy loss should do the job :)
Sounds correct!

> Thanks. I changed the GCN model to return `F.sigmoid(x)` and replaced the loss function...
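For reference, a minimal sketch of that swap in PyTorch (the shapes and tensor names are illustrative, not from this code release):

```python
import torch
import torch.nn.functional as F

# Illustrative multi-label setup: raw GCN outputs (no softmax applied)
# and independent 0/1 targets of the same shape.
logits = torch.randn(4, 3)                      # 4 nodes, 3 labels
targets = torch.randint(0, 2, (4, 3)).float()

# Single-label softmax cross entropy (the original setup) assumes
# exactly one class per node:
#   loss = F.cross_entropy(logits, class_indices)

# Multi-label sigmoid cross entropy treats each label as an
# independent Bernoulli:
loss = F.binary_cross_entropy_with_logits(logits, targets)
```

If the model already returns `F.sigmoid(x)`, as in the quoted change, the matching loss is `F.binary_cross_entropy` on the probabilities; applying `binary_cross_entropy_with_logits` to the raw outputs instead is the numerically more stable variant.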
This is because the cross-entropy loss applies to N^2 terms, i.e. all potential edges, while the KL term only applies to N terms, i.e. all nodes. This normalization makes sure both sums are measured on the same per-term scale.
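To make the scale difference concrete, one way to write the normalized objective (a sketch; the exact constants in the released code may differ):

```latex
\mathcal{L}
  = \frac{1}{N^2} \sum_{i,j=1}^{N} \mathrm{BCE}\!\left(A_{ij}, \hat{A}_{ij}\right)
  + \frac{1}{N^2} \sum_{i=1}^{N} \mathrm{KL}\!\left( q(\mathbf{z}_i \mid X, A) \,\|\, p(\mathbf{z}_i) \right)
```

With both sums divided by the same N^2 factor, the reconstruction term aggregates N^2 edge terms while the KL term aggregates only N node terms, so the KL contribution ends up roughly a factor of N smaller.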
I think you are right, this indeed makes the KL term quite small in comparison. The model can still be seen as a beta-VAE with a very small beta parameter...
Beta acts as a regularizer / soft constraint on the latent code, so link prediction performance will typically suffer if the parameter is set too large. You should...
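For experimentation, here is a minimal PyTorch sketch of a beta-weighted VGAE loss (the helper name and signature are hypothetical, not part of this code release):

```python
import torch
import torch.nn.functional as F

def vgae_loss(recon_logits, adj_target, mu, logvar, beta=1.0):
    """Beta-weighted VGAE objective (hypothetical helper).

    recon_logits, adj_target: (N, N) decoder logits and 0/1 adjacency.
    mu, logvar: (N, D) per-node posterior parameters.
    """
    n = adj_target.shape[0]
    # Reconstruction term: mean over all N^2 potential edges.
    recon = F.binary_cross_entropy_with_logits(recon_logits, adj_target)
    # KL(q(z|X,A) || N(0, I)): summed over latent dims, averaged over nodes.
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
    # Dividing the KL by N reproduces the small effective beta discussed
    # above; raising beta constrains the latent code more strongly.
    return recon + beta * kl / n
```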
Can you paste the code you used to reconstruct the adjacency matrix?
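For comparison, a minimal sketch of the inner-product decoder that (V)GAE uses to reconstruct the adjacency matrix; the 0.5 cutoff below is an arbitrary choice for illustration:

```python
import torch

def reconstruct_adjacency(z, threshold=0.5):
    """Reconstruct a binary adjacency matrix from node embeddings z (N, D)
    via the inner-product decoder A_hat = sigmoid(z z^T)."""
    adj_prob = torch.sigmoid(z @ z.t())   # (N, N) edge probabilities
    return (adj_prob > threshold).int()   # hypothetical binarization cutoff
```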
Maybe your training set is unbalanced? It looks like this is not necessarily a problem with this code release...
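If imbalance does turn out to be the issue, one common remedy (a sketch, assuming a dense 0/1 adjacency tensor) is to up-weight the rare positive edge terms in the reconstruction loss:

```python
import torch

adj = (torch.rand(100, 100) < 0.05).float()   # stand-in sparse adjacency
n = adj.shape[0]
# Ratio of non-edges to edges; sparse graphs give pos_weight >> 1.
pos_weight = (n * n - adj.sum()) / adj.sum()
loss_fn = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight)
```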
I see, I think I misunderstood the question: I assumed this was related to some custom adaptation of the model or the use of some other dataset. Note...
Thanks, this is indeed a very insightful analysis. It would be interesting to see what model/loss adaptations are necessary for this model to generalize in more realistic scenarios. We have...
You can try `exp(-d(x,y))` as a scoring function for links, where `d(x,y)` is the Euclidean distance between two node embedding vectors. This might be less susceptible to producing...
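A minimal sketch of that scoring function in PyTorch (the helper name is hypothetical):

```python
import torch

def distance_link_scores(z):
    """Score all node pairs with exp(-d(x, y)), where d is the Euclidean
    distance between embeddings; scores lie in (0, 1] and decay with
    distance instead of growing with embedding norm."""
    return torch.exp(-torch.cdist(z, z))   # (N, N) link scores

scores = distance_link_scores(torch.randn(5, 16))
```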