Recurrent-Knowledge-Graph-Embedding
Result is worse than result of pre-embedding
Sorry to bother you,
I ran the RKGE code, but I found that because the ratio of positive to negative samples in your code is unbalanced (1 : 0.05), the loss of the model is very low. If I change the ratio to 1 : 1, the loss rises above 0.4 per pair.
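To make the imbalance concrete, here is a minimal sketch of ratio-controlled negative sampling. The function and variable names are hypothetical, not taken from the RKGE code; it only illustrates how a 1 : 0.05 versus 1 : 1 setting changes the number of negatives seen during training.

```python
import random

def sample_negatives(positive_items, all_items, ratio=1.0):
    """Sample negatives for one user at the given negative:positive ratio.

    Hypothetical helper for illustration; the actual RKGE sampling
    code may be organized differently.
    """
    candidates = [i for i in all_items if i not in positive_items]
    n_neg = int(len(positive_items) * ratio)
    return random.sample(candidates, min(n_neg, len(candidates)))

pos = set(range(100))
items = set(range(1000))
# ratio=0.05 (the reported setting): 100 positives give only 5 negatives
print(len(sample_negatives(pos, items, ratio=0.05)))  # 5
# ratio=1.0: a balanced 1:1 training set
print(len(sample_negatives(pos, items, ratio=1.0)))   # 100
```

With so few negatives, the model can drive the loss down without learning to rank items, which may explain the very low loss under the default setting.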
The model's Prec@1, Prec@5, Prec@10, and MRR are 0.151, 0.108, 0.095, and 0.334 respectively, while the pre_embedding alone achieves 0.185, 0.122, 0.128, and 0.4211. After training RKGE, the result is even worse than the pre_embedding. Could you kindly share the complete version of your code?
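For anyone reproducing these numbers, the metrics above can be computed as follows. This is a generic sketch of Prec@k and MRR, not the evaluation code from this repository:

```python
def precision_at_k(ranked, relevant, k):
    """Fraction of the top-k recommended items that are relevant."""
    return sum(1 for item in ranked[:k] if item in relevant) / k

def mrr(ranked, relevant):
    """Reciprocal rank of the first relevant item (0 if none appears)."""
    for rank, item in enumerate(ranked, start=1):
        if item in relevant:
            return 1.0 / rank
    return 0.0

# Toy ranking for one user: relevant items sit at ranks 2 and 5.
ranked = ["a", "b", "c", "d", "e"]
relevant = {"b", "e"}
print(precision_at_k(ranked, relevant, 5))  # 0.4
print(mrr(ranked, relevant))                # 0.5
```

The reported numbers would be these per-user values averaged over all test users.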
I met the same problem. The paper does not explain how to obtain the pre_embedding. After loading the pre_embedding, the performance gets worse and worse as training runs for more epochs.
Have you solved it yet?
Having problems is normal; I think this code itself has an issue. The model is clearly trained to score paths, so at evaluation time it should also score each path and rank recommendations by those path scores. But in this code, evaluation instead scores recommendations directly by the inner product of the user embedding and the item embedding. I don't understand why.
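The evaluation behavior described above can be sketched as follows. This is a minimal NumPy illustration of inner-product scoring under assumed shapes, not the repository's actual evaluation code:

```python
import numpy as np

# Hypothetical sizes for illustration only.
n_users, n_items, dim = 3, 5, 8
rng = np.random.default_rng(0)
user_emb = rng.normal(size=(n_users, dim))
item_emb = rng.normal(size=(n_items, dim))

# Score every (user, item) pair by a plain inner product,
# bypassing the learned path scores entirely.
scores = user_emb @ item_emb.T          # shape (n_users, n_items)
ranking = np.argsort(-scores, axis=1)   # per-user item ranking, best first
print(ranking.shape)  # (3, 5)
```

If training optimizes path scores but evaluation ranks by this inner product, the two objectives are only loosely coupled, which could contribute to the degradation reported above.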