SimKGC
Questions about the loss function
I'm interested in how InfoNCE is implemented in your code with torch.nn.CrossEntropyLoss(), but I haven't been able to find good learning material. Can you explain why InfoNCE can be implemented this way in the provided code?
In addition, how should this part of the code be understood?
# head + relation -> tail
loss = self.criterion(logits, labels)
# tail -> head + relation
loss += self.criterion(logits[:, :batch_size].t(), labels)
How should "tail -> head + relation" be understood? I would appreciate your help! Looking forward to your reply!
You can refer to the answer here: https://github.com/intfloat/SimKGC/issues/10#issuecomment-1296792284
The InfoNCE loss is basically a cross entropy loss but the labels are not pre-defined like in text classification.
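Concretely, here is a minimal self-contained sketch of why InfoNCE with in-batch negatives reduces to torch.nn.CrossEntropyLoss. This is not the repository's exact code: the cosine-similarity scoring, the fixed temperature of 0.05, and the tensor names are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

batch_size, dim = 4, 8
# Assumed: L2-normalized (head, relation) query embeddings and tail embeddings.
hr = F.normalize(torch.randn(batch_size, dim), dim=-1)
t = F.normalize(torch.randn(batch_size, dim), dim=-1)

# logits[i][j] = cosine similarity between query (h_i, r_i) and tail t_j,
# divided by an assumed temperature of 0.05.
logits = hr @ t.t() / 0.05

# Row i has exactly one positive: the tail that came with the i-th triple.
# Every other tail in the batch is a negative, so the "class label" for
# row i is simply the column index i.
labels = torch.arange(batch_size)

criterion = nn.CrossEntropyLoss()
# head + relation -> tail: classify the correct tail among all in-batch tails
loss = criterion(logits, labels)
# tail -> head + relation: transpose, so each tail is classified against all
# in-batch (head, relation) queries; the same diagonal labels apply
loss += criterion(logits.t(), labels)

Because each positive pair sits on the diagonal of the similarity matrix, transposing the matrix swaps the roles of query and candidate, which is exactly the "tail -> head + relation" term. In the repository the logits matrix can contain extra negative columns, which is why it slices logits[:, :batch_size] to get a square matrix before transposing.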
Thanks for your reply!
Hello, is it possible to use only head + relation -> tail?
Of course. Just comment out the second line, loss += ..., but performance will drop a little.
Thank you very much!
After I comment out the second line, loss += ..., during training, do I also need to comment out backward_metrics = ... during testing?
Because every time I look at the final results, the forward metrics seem to be better.
No need. The forward metrics correspond to predicting the tail entity given the head entity and relation; the backward metrics correspond to predicting the head entity given the tail entity and relation. For most datasets, the first prediction task is easier, so its metrics are better.
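For intuition, here is a hypothetical sketch of how the two directions can be evaluated separately. The entity embeddings, query embeddings, and the mean_rank helper below are all illustrative assumptions, not SimKGC's evaluation code.

import torch
import torch.nn.functional as F

num_entities, dim, batch_size = 100, 8, 4
ent = F.normalize(torch.randn(num_entities, dim), dim=-1)  # all candidate entities
hr = F.normalize(torch.randn(batch_size, dim), dim=-1)     # (head, relation) queries
tr = F.normalize(torch.randn(batch_size, dim), dim=-1)     # (tail, inverse relation) queries
gold = torch.randint(0, num_entities, (batch_size,))       # index of the correct entity

def mean_rank(queries, targets):
    scores = queries @ ent.t()                        # similarity to every entity
    gold_scores = scores.gather(1, targets.unsqueeze(1))
    ranks = (scores >= gold_scores).sum(dim=1)        # 1-based rank of the gold entity
    return ranks.float().mean().item()

forward_mr = mean_rank(hr, gold)   # head + relation -> tail
backward_mr = mean_rank(tr, gold)  # tail + inverse relation -> head

The same scoring function serves both directions; only the query changes, which is why the forward and backward metrics can diverge.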
Got it, thanks!