
End-to-end Training for Multimodal Recommendation Systems

6 IDvs.MoRec issues

self.fc = MLP_Layers(word_embedding_dim=num_fc_ftr, item_embedding_dim=args.embedding_dim, layers=[args.embedding_dim] * (args.dnn_layer + 1), drop_rate=args.drop_rate)

During training, this code transforms the input embeddings and then computes similarities against the candidate positive/negative samples and the BCE loss. Why, at prediction time, is item_embeddings used directly without passing through the MLP_Layers above?

item_embeddings = item_embeddings.to(local_rank)
with torch.no_grad():
    eval_all_user = []
    item_rank = torch.Tensor(np.arange(item_num) + 1).to(local_rank)
    for data in eval_dl:
        user_ids, input_embs,...
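A common pattern that resolves this question: the same MLP projection used in training is applied once, offline, to every item's raw features, and the resulting tensor is cached as item_embeddings before evaluation starts, so the eval loop can index it directly. Below is a minimal sketch of that pattern; `MLPProjector` and `precompute_item_embeddings` are hypothetical stand-ins, not the repository's actual `MLP_Layers` class.

```python
import torch
import torch.nn as nn

class MLPProjector(nn.Module):
    # Hypothetical stand-in for MLP_Layers: projects raw (e.g. word)
    # embeddings into the item embedding space used for scoring.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, out_dim),
            nn.GELU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def precompute_item_embeddings(projector, raw_item_feats, batch_size=128):
    # Apply the SAME projection used during training once, in batches,
    # so evaluation can look up cached item_embeddings without re-running
    # the MLP for every user-item pair.
    projector.eval()
    outs = []
    for i in range(0, raw_item_feats.size(0), batch_size):
        outs.append(projector(raw_item_feats[i:i + batch_size]))
    return torch.cat(outs, dim=0)
```

If the cached tensor is built this way, using item_embeddings directly at prediction time is consistent with training: the MLP has already been applied, just ahead of time rather than inside the eval loop.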

I am intrigued by your work and have a few questions to discuss with you. You conducted a hyperparameter search for IDRec and MoRec; could you provide the optimal hyperparameters that...

Which article proposed the in-batch debiased cross-entropy loss? Can you provide the relevant literature?
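For context on what this question refers to: in-batch softmax treats the other items in the batch as negatives, which over-penalizes popular items because they appear as negatives more often; the usual debiasing is a logQ correction that subtracts the log of each item's sampling probability from its logit. A minimal sketch, assuming `item_pop` holds each in-batch item's sampling probability (the function name and temperature value are illustrative, not from the repository):

```python
import torch
import torch.nn.functional as F

def inbatch_debiased_ce(user_emb, item_emb, item_pop, temperature=0.05):
    # user_emb, item_emb: (B, D) paired user/item representations.
    # item_pop: (B,) sampling probability of each in-batch item.
    # Similarity of every user against every in-batch item.
    logits = user_emb @ item_emb.t() / temperature
    # logQ correction: subtract log sampling probability per item (column),
    # so frequently sampled (popular) items are not over-penalized as negatives.
    logits = logits - torch.log(item_pop).unsqueeze(0)
    # The diagonal holds each user's positive item.
    labels = torch.arange(user_emb.size(0), device=user_emb.device)
    return F.cross_entropy(logits, labels)
```

Note that when sampling is uniform the correction is a constant shift and the loss reduces to plain in-batch cross-entropy; the correction only changes the loss when item popularities differ.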

Firstly, thanks for your insightful work. I am wondering where the training code for DSSM is?

Thanks for sharing this interesting work. I was wondering if you are going to share the scripts for fine-tuning LLMs.