Xubin Ren
Hi! 👋 Thanks for your interest! Here are some instructions for the re-rank experiments. 1. Initially, we trained a LightGCN model on the Amazon dataset. Subsequently, we selected the top-30/35/40/45/50...
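The candidate-selection step described above can be sketched roughly as follows. This is a minimal NumPy sketch under assumptions: the random score matrix and the `top_k_candidates` helper are hypothetical illustrations, not the code used in the experiments.

```python
import numpy as np

# Hypothetical user-item score matrix, standing in for the
# predictions of a trained LightGCN model (users x items).
rng = np.random.default_rng(0)
scores = rng.random((5, 100))

def top_k_candidates(scores, k):
    # Indices of the k highest-scoring items for each user,
    # sorted from highest to lowest score.
    return np.argsort(-scores, axis=1)[:, :k]

# e.g. top-30 candidates per user, as in the experiments above.
cands = top_k_candidates(scores, 30)
print(cands.shape)
```

The same helper would be called with k = 35/40/45/50 to produce the other candidate pools before re-ranking.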
@Tianyu9748 Hi Tianyu! Thanks for your interest! The GPT model is frozen because we consistently used `gpt-3.5-turbo` throughout the experiments. Best regards, Xubin
Hi 👋! Thanks for the contribution. We will add it ASAP :) Best regards, Xubin
Hi 👋! Thanks for your interest in RLMRec! The only difference between the validation set and the test set is that they contain different user-item interactions, which are split randomly; any performance gap between them simply comes from the different evaluation samples, and neither split involves the multi-modal representations. I suspect what you observed stems from differences in performance stability across the different ways of incorporating multi-modal representations, which causes the results to fluctuate. Also, the standard training procedure judges convergence and selects the best model parameters based on validation-set performance, and only then evaluates on the test set, so the model naturally looks stronger on the validation set. Hope this helps :) Best regards, Xubin
Hi 👋! Thanks for your interest in SSLRec! The core of dataset construction is simply converting the user-item interactions into a 0-1 sparse matrix; for the exact format, see the [dataset format documentation](https://github.com/HKUDS/SSLRec/blob/main/docs/Datasets.md). Best regards, Xubin
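The conversion described above can be sketched with SciPy. This is a minimal illustration, not SSLRec's actual loading code; the toy interaction list and matrix sizes are made up for the example.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical toy interactions: (user_id, item_id) pairs.
interactions = [(0, 1), (0, 3), (1, 0), (2, 2), (2, 3)]
num_users, num_items = 3, 4

rows, cols = zip(*interactions)
data = np.ones(len(interactions), dtype=np.float32)

# 0-1 sparse user-item interaction matrix: entry (u, i) is 1
# if user u interacted with item i, and 0 otherwise.
mat = csr_matrix((data, (rows, cols)), shape=(num_users, num_items))
print(mat.toarray())
```

For the exact on-disk format SSLRec expects, the linked documentation above is authoritative.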
Hi 👋! Thanks for your interest in RLMRec! I think you can utilize the batch API provided by OpenAI or multi-thread programming for profile generation to accelerate the process. Additionally,...
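The multi-threading suggestion above can be sketched with Python's standard library. This is a minimal sketch under assumptions: `generate_profile` is a hypothetical placeholder for your actual chat-completion request, not RLMRec's code.

```python
from concurrent.futures import ThreadPoolExecutor

def generate_profile(user_id):
    # Placeholder for a real LLM call (e.g. an OpenAI
    # chat-completion request); replace with your client code.
    return f"profile for user {user_id}"

user_ids = list(range(8))

# Issue requests concurrently; I/O-bound API calls overlap well
# in threads, so wall-clock time drops roughly with max_workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    profiles = list(pool.map(generate_profile, user_ids))

print(len(profiles))
```

`pool.map` preserves input order, so each profile lines up with its user id; mind your API provider's rate limits when raising `max_workers`.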
Hi 👋! Thanks for pointing that out! I will fix the typo on the homepage :) Best regards, Xubin
Hi, This is because the [SSLRec paper](https://arxiv.org/abs/2308.05697) and the [LightGCN paper](https://arxiv.org/abs/2002.02126) use two different datasets (with differing numbers of users, items, and interactions), although both are processed from the Gowalla...
Hi 👋, thanks for your interest! Following earlier open-source work such as [SGL](https://github.com/wujcan/SGL-TensorFlow) and [HCCF](https://github.com/akaxlh/HCCF), this work does not use a validation set during training; to keep comparisons fair, the model is evaluated directly on the test set once training finishes. That said, our subsequent unified self-supervised recommendation library, [SSLRec](https://github.com/HKUDS/SSLRec), covers the CF scenario and includes code for validation-set usage and early stopping, which better matches industrial practice. Hope this helps :) Best regards, Xubin
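The early-stopping idea mentioned above can be sketched in a few lines. This is a generic illustration, not SSLRec's implementation; the function name and the toy metric sequence are made up.

```python
def train_with_early_stop(val_metrics, patience=3):
    # Track the best validation metric seen so far; stop once
    # `patience` epochs pass without any improvement.
    best, best_epoch = float("-inf"), -1
    for epoch, metric in enumerate(val_metrics):
        if metric > best:
            best, best_epoch = metric, epoch
        elif epoch - best_epoch >= patience:
            break
    return best_epoch, best

# Toy validation curve: improves for 3 epochs, then degrades.
print(train_with_early_stop([0.1, 0.2, 0.25, 0.24, 0.23, 0.22]))
```

In practice the checkpoint saved at `best_epoch` is the one evaluated on the test set.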
Hi! Thanks for your interest! You need to install the `yaml` library in your Python environment first. Best regards, Xubin
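For reference, the `yaml` module is provided by the PyYAML package (`pip install pyyaml`); a quick check that it is installed and working:

```python
import yaml  # provided by the PyYAML package: pip install pyyaml

# Parse a small YAML snippet, as the repo's config loader does.
config = yaml.safe_load("lr: 0.001\nbatch_size: 256")
print(config)
```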