GraphSAGE
Why use link_pred_layer loss
Dear Authors:
May I know why you use link_pred_layer.loss? Is it the same as what you describe in the paper? Does it use hinge loss?
self.loss += self.link_pred_layer.loss(self.outputs1, self.outputs2, self.neg_outputs)
Can you explain a little more about the loss function?
Thanks.
As far as I know, link_pred_layer.loss is the unsupervised loss. The name itself suggests that unsupervised learning is cast as a kind of link prediction task. The loss in the original paper corresponds to _xent_loss in the code, which is the typical word2vec skip-gram negative-sampling loss and roughly equivalent to NCE loss. A hinge loss, _hinge_loss, is also defined in the code; you can use it instead if you prefer.
Yes the answer is correct.
In addition, you can also add the link-prediction loss as an auxiliary loss for supervised learning.
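A sketch of what that combination might look like; the alpha weight and function name are hypothetical, not from the repo:

```python
def combined_loss(supervised_loss, link_pred_loss, alpha=0.5):
    """Supervised objective plus a weighted unsupervised link-prediction
    term; alpha trades off the two and would be tuned on validation data."""
    return supervised_loss + alpha * link_pred_loss
```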