Chong Chen
It seems that the authors of MBGCN will open-source their code soon; let's see what happens.
Under the leave-one-out setting, recall == HR (hit ratio).
Hi, this coefficient controls the L2 regularization, which prevents the model from overfitting. We include self.lambda_bilinear in the code to make it more extensible. In fact, it is not an influential parameter in our model, and the default value of 0 is fine, because our model mainly relies on tuning dropout to prevent overfitting.
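A minimal NumPy sketch of what such a coefficient does (this is an illustration, not the repository's actual code; `lam` stands in for a hyperparameter like `self.lambda_bilinear`):

```python
import numpy as np

def total_loss(pred_loss, weights, lam):
    """Total loss = prediction loss + lam * L2 penalty on the weights.

    With lam == 0 the penalty term vanishes entirely, matching the
    default setting described above; regularization would then come
    only from dropout elsewhere in the model.
    """
    return pred_loss + lam * np.sum(weights ** 2)

w = np.array([0.5, -1.0, 2.0])
base = 0.3
print(total_loss(base, w, 0.0))  # lam = 0: identical to the prediction loss
print(total_loss(base, w, 0.1))  # lam > 0: large weights are penalized
```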
Yes. Optimization involves taking the derivative of the loss, and the derivative of a constant term is 0, so dropping it does not affect the result of the loss function :)
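A quick numerical check of this point, using a toy one-parameter loss and a finite-difference gradient (illustrative only, not tied to the model's actual loss):

```python
def grad(f, x, eps=1e-6):
    # Central finite-difference estimate of df/dx.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

loss = lambda w: (w - 3.0) ** 2          # toy loss
loss_plus_c = lambda w: (w - 3.0) ** 2 + 7.0  # same loss plus a constant

w0 = 1.5
g1 = grad(loss, w0)
g2 = grad(loss_plus_c, w0)
# The constant's derivative is 0, so both gradients agree.
print(abs(g1 - g2))
```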
Hi, Python 2.7.12 with TensorFlow 1.7.0.
> Hi, I tried it and found that these two versions are not compatible. May I ask how you solved this? Did you get it running?

Where exactly is the incompatibility?
Sure, feel free to reach out if you have any further questions~
Hi, thanks for your interest in our work! For the first question, as you have mentioned, we used the same Last.fm dataset as the CFM paper for objective comparison. The...
Hi, thanks for your question! We use the leave-one-out evaluation protocol; under this setting, recall@k is equal to hit@k. Recall is the fraction of the target items that are successfully...
In fact, we downloaded the processed Frappe and Last.fm datasets directly from the CFM GitHub repository, so we also do not have the unprocessed datasets. Maybe you can ask the authors of...