pupupudding

4 comments by pupupudding

After 100+ batches, the loss on both the training and validation sets barely decreases any more. Is convergence judged mainly by watching how metrics such as mrr_5 change?
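(For what it's worth, here is a minimal sketch of what "watching mrr_5" could look like in practice, assuming hypothetical `train_one_batch()` and `evaluate()` helpers rather than the repo's actual code: training stops when validation MRR@5 plateaus instead of when the loss does.)

```python
# Minimal sketch (assumptions, not the project's code): judge convergence by
# validation MRR@5 with a patience counter instead of by the flat loss curve.
def train_with_mrr_early_stop(num_batches, eval_every, patience,
                              train_one_batch, evaluate):
    """Stop when validation MRR@5 has not improved for `patience` evaluations."""
    best_mrr5 = float("-inf")
    stale = 0
    for step in range(1, num_batches + 1):
        loss = train_one_batch()          # one optimization step; loss may plateau early
        if step % eval_every == 0:
            mrr5 = evaluate()             # hypothetical helper: returns MRR@5 on validation
            if mrr5 > best_mrr5:
                best_mrr5, stale = mrr5, 0
            else:
                stale += 1                # loss is flat, so track the ranking metric instead
            print(f"step {step}: loss={loss:.4f} mrr_5={mrr5:.4f} best={best_mrr5:.4f}")
            if stale >= patience:
                print("MRR@5 stopped improving; treating the model as converged.")
                break
    return best_mrr5
```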

Thank you, Prof. Yuan~ I have successfully reproduced the experiment~ This may be a separate question, though: why are the pretrain and finetune data in the demo aligned one-to-one? If data from more users who have no overlap with the target domain were added during pretraining, how would that affect the overall performance of the system?

Can you please check again? I have printed the value of negtive_samples for every iteration and it's always 99 (same as the negtive_samples argument).

I see. I didn't notice it was the top-k retrieval within the batch. Sorry for bothering you~
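(For context, a minimal sketch of what in-batch top-k retrieval can look like, based on my own assumptions rather than the project's actual evaluation code: each positive item is ranked against the other items in the same batch, so the explicit `negtive_samples` count is not the number being ranked against.)

```python
# Sketch (assumptions, not the project's code) of "top-k retrieval within the
# batch": row i's positive is item_emb[i], ranked against the rest of the batch.
import torch

def in_batch_mrr_at_k(user_emb: torch.Tensor, item_emb: torch.Tensor, k: int = 5) -> float:
    """user_emb, item_emb: [batch_size, dim]; the positive sits on the diagonal."""
    scores = user_emb @ item_emb.t()                   # [B, B] similarity matrix
    target = torch.arange(scores.size(0))              # index of each row's positive
    pos_scores = scores.gather(1, target.view(-1, 1))  # [B, 1] score of the positive
    # rank of the positive = number of in-batch items scored at least as high
    ranks = (scores >= pos_scores).sum(dim=1)          # 1 = best possible rank
    rr = torch.where(ranks <= k, 1.0 / ranks.float(),
                     torch.zeros_like(ranks, dtype=torch.float))
    return rr.mean().item()

# Example: a batch of 8 users/items with 16-dim embeddings
mrr5 = in_batch_mrr_at_k(torch.randn(8, 16), torch.randn(8, 16), k=5)
print(f"in-batch MRR@5: {mrr5:.3f}")
```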