LLMs as recommenders
Hello, I am very interested in the LLMs-as-recommenders experiment in Section A.3 of your paper — the case study on LLM-based re-ranking, where the candidate items are retrieved by LightGCN. I would like to reproduce it. Could you provide this part of the code, or a detailed description of the setup?
Hi! 👋
Thanks for your interest! Here are some instructions for the re-ranking experiments.
- Initially, we trained a LightGCN model on the Amazon dataset. We then selected the top-30/35/40/45/50 items recommended by LightGCN as the candidate items for re-ranking, and evaluated Recall and NDCG on the re-ranked top-10 and top-20 items.
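To make the evaluation step concrete, here is a minimal sketch of binary-relevance Recall@K and NDCG@K, the metrics described above. The function names are my own; this is not the repository's evaluation code.

```python
import math

def recall_at_k(ranked, relevant, k):
    # Fraction of the user's relevant items that appear in the top-k list.
    hits = len(set(ranked[:k]) & set(relevant))
    return hits / len(relevant) if relevant else 0.0

def ndcg_at_k(ranked, relevant, k):
    # Binary-relevance NDCG: DCG over the top-k, divided by the ideal DCG.
    rel = set(relevant)
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked[:k]) if item in rel)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(len(rel), k)))
    return dcg / ideal if ideal > 0 else 0.0
```

The overall numbers would then be these per-user scores averaged across all users in the test set.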
- We used the item title as the meta information for each item. The re-ranking was guided by the prompts illustrated in Figure 9, which also incorporate the user's historically interacted items. The re-ranking was performed individually for each user, after which the overall performance was computed over the entire dataset.
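A minimal sketch of the per-user step: building a prompt from the history and candidate titles, and mapping the model's reply back onto the candidate list. The prompt wording here is illustrative only (not the exact Figure 9 prompt), and the actual gpt-3.5-turbo chat-completion call is left out as a placeholder.

```python
def build_prompt(history_titles, candidate_titles):
    # Assemble an illustrative re-ranking prompt from item titles.
    lines = ["I have interacted with the following items:"]
    lines += [f"- {t}" for t in history_titles]
    lines.append("Please rank the candidate items below by how likely "
                 "I am to interact with them next.")
    lines += [f"{i}. {t}" for i, t in enumerate(candidate_titles, 1)]
    lines.append("Answer with the candidate numbers in order, comma-separated.")
    return "\n".join(lines)

def parse_ranking(reply, candidates):
    # Map returned indices back onto the candidate list, skipping junk tokens.
    order = []
    for tok in reply.replace("\n", ",").split(","):
        tok = tok.strip().rstrip(".")
        if tok.isdigit() and 1 <= int(tok) <= len(candidates):
            item = candidates[int(tok) - 1]
            if item not in order:
                order.append(item)
    # Candidates the model omitted keep their original LightGCN order at the tail.
    order += [c for c in candidates if c not in order]
    return order
```

In a full run you would send `build_prompt(...)` to gpt-3.5-turbo for each user, feed the reply to `parse_ranking(...)`, and score the resulting list with Recall/NDCG.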
I hope the information provided is useful :)
Best regards, Xubin
@Re-bin Thank you for open-sourcing this great work. A quick question: is GPT frozen or updated during training?
Best, Tianyu
@Tianyu9748 Hi Tianyu!
Thanks for your interest! GPT is frozen: we used gpt-3.5-turbo consistently throughout the experiments, so its parameters were never updated.
Best regards, Xubin
@Re-bin Thank you for the response.