
Why use link_pred_layer loss

Open lihuiliullh opened this issue 6 years ago • 2 comments

Dear Authors:

May I know why you use link_pred_layer.loss? Is it the same as what you describe in the paper? Does it use hinge loss?

self.loss += self.link_pred_layer.loss(self.outputs1, self.outputs2, self.neg_outputs)

Can you explain a little more about the loss function?

Thanks.

lihuiliullh avatar Aug 27 '19 04:08 lihuiliullh

As far as I know, link_pred_layer.loss is for unsupervised learning. The name itself suggests that unsupervised training is cast as a kind of link-prediction task. The loss in the original paper corresponds to _xent_loss in the code, which is a typical word2vec skip-gram negative-sampling loss, roughly equivalent to NCE loss. A hinge loss is also defined in the code, named _hinge_loss; you can use that instead if you want.
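To make the two options concrete, here is a minimal NumPy sketch of both loss shapes, written against the same three inputs the call above passes in (anchor embeddings, positive-neighbor embeddings, negative-sample embeddings). This is an illustration of the math, not the repo's TensorFlow implementation; the function names and the margin value of 3 are assumptions for this sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def xent_loss(outputs1, outputs2, neg_outputs):
    """Skip-gram negative-sampling cross-entropy loss (sketch of _xent_loss).

    outputs1, outputs2: (batch, dim) anchor and positive-neighbor embeddings.
    neg_outputs: (num_neg, dim) negative-sample embeddings.
    """
    aff = np.sum(outputs1 * outputs2, axis=-1)        # (batch,) positive affinities
    neg_aff = outputs1 @ neg_outputs.T                # (batch, num_neg) negative affinities
    true_xent = -np.log(sigmoid(aff))                 # pull positives together
    neg_xent = -np.log(sigmoid(-neg_aff)).sum(-1)     # push negatives apart
    return float(np.mean(true_xent + neg_xent))

def hinge_loss(outputs1, outputs2, neg_outputs, margin=3.0):
    """Max-margin alternative (sketch of _hinge_loss): each negative affinity
    should be at least `margin` below the positive affinity."""
    aff = np.sum(outputs1 * outputs2, axis=-1)
    neg_aff = outputs1 @ neg_outputs.T
    diff = np.maximum(neg_aff - aff[:, None] + margin, 0.0)
    return float(np.mean(np.sum(diff, axis=-1)))
```

With either loss, embeddings whose dot product is high for linked pairs and low for negative samples drive the loss down, which is exactly the link-prediction objective described in the paper.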

skx300 avatar Aug 29 '19 08:08 skx300

Yes, the answer is correct.

In addition, you can also add the link-prediction loss as an auxiliary loss for supervised learning.

RexYing avatar Sep 18 '19 18:09 RexYing