
Personalized Prompt Learning for Explainable Recommendation

6 PEPLER issues

Dear authors, I have recently read your amazing paper. I appreciate your research because it sheds light on my own work. Now I'm trying to reproduce the results, so...

Hello authors, when reproducing the experimental results on the various Amazon datasets, I consistently find that the rating_loss on the training set is very small but the rating prediction error on the test set is very large (the former is around 0.3; the latter is up to about 20 with the MF method and about 4 with the MLP method). How should I address this overfitting? (I tried adjusting the regularization coefficient of the rating term, but it made little difference.) Looking forward to your reply, thank you!

Hello! Regarding your PEPLER (MLP) experiment on the TripAdvisor dataset: following the code and command settings you provided, we ran experiments on all 5 datasets, but we could not reproduce the results reported in the paper. Could you share the specific hyperparameter settings so that we can better reproduce your experiments? Thank you!

I wonder if you tried simply averaging the embeddings of the title tokens to initialize the continuous prompts. I feel it could be a simpler solution to the...
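A minimal sketch of the averaging idea the issue suggests, using NumPy with a hypothetical embedding table and title token ids (illustrative only, not PEPLER's actual code):

```python
import numpy as np

# Hypothetical setup: a GPT-2-sized token embedding table and the
# token ids of an item's title (both illustrative placeholders).
vocab_size, d_model = 50257, 768
rng = np.random.default_rng(0)
token_embeddings = rng.standard_normal((vocab_size, d_model))
title_token_ids = [464, 3807, 318, 922]  # e.g. a tokenized title

# Average the title's token embeddings into a single vector and use
# that vector to initialize a continuous prompt embedding.
prompt_init = token_embeddings[title_token_ids].mean(axis=0)
print(prompt_init.shape)  # (768,)
```

The averaged vector keeps the prompt in the same space as the pretrained token embeddings, which is the appeal of this initialization over a random one.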

Hi, when tuning the prompt only, after line 79 of main.py executes, the model's token embeddings also receive gradients. At that point, are all three of the user, item, and token embeddings being updated?
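A toy PyTorch sketch of the behavior the issue asks about: if the pretrained token embedding table is not explicitly frozen, it accumulates gradients alongside the user and item prompt embeddings. The module names and sizes below are illustrative stand-ins, not PEPLER's actual model:

```python
import torch
import torch.nn as nn

# Toy stand-ins for the three embedding tables (illustrative sizes).
token_emb = nn.Embedding(100, 8)  # pretrained LM token embeddings
user_emb = nn.Embedding(10, 8)    # continuous prompt: user side
item_emb = nn.Embedding(10, 8)    # continuous prompt: item side

# To tune the prompt only, freeze the token embeddings explicitly;
# otherwise they also receive gradients and get updated by the optimizer.
token_emb.weight.requires_grad_(False)

prompt = torch.cat([user_emb(torch.tensor([1])),
                    item_emb(torch.tensor([2])),
                    token_emb(torch.tensor([3]))], dim=1)
prompt.sum().backward()

print(token_emb.weight.grad is None)      # frozen: no gradient
print(user_emb.weight.grad is not None)   # prompt params get gradients
```

Equivalently, passing only the prompt parameters to the optimizer keeps the token embeddings fixed even if they carry gradients.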

Hello, I am currently reproducing the discrete prompt learning part of your experiments, but when using the Yelp dataset I get the error: Token indices sequence length is longer than the specified maximum sequence length for this model (1034 > 1024). Running this sequence through the model will result in indexing errors...
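That warning comes from exceeding GPT-2's 1024-token context window. A minimal sketch of one common workaround, truncating the encoded sequence before it reaches the model (the token id list here is a placeholder for the over-long Yelp example):

```python
# GPT-2's context window; sequences longer than this trigger the
# "Token indices sequence length is longer than ..." warning.
MAX_LEN = 1024

# Hypothetical over-long token id sequence (1034 ids, as in the error).
token_ids = list(range(1034))

# Truncate before feeding the model. (With Hugging Face tokenizers you
# can instead pass truncation=True and max_length=1024 when encoding.)
token_ids = token_ids[:MAX_LEN]
print(len(token_ids))  # 1024
```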