Patrick Orlando
@rlcauvin, you pass the positive `candidate_ids`. If you construct the `Retrieval` task with `remove_accidental_hits=True`, then the loss calculation will ensure that any sampled negatives that are the same item as...
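A minimal sketch of that setup, assuming a `tfrs.Model` subclass where the tower layers (`user_model`, `item_model`) and the feature names (`"user_id"`, `"item_id"`) are placeholders for whatever you actually use:

```python
import tensorflow_recommenders as tfrs

class RetrievalModel(tfrs.Model):
    def __init__(self, user_model, item_model):
        super().__init__()
        self.user_model = user_model
        self.item_model = item_model
        # Mask any in-batch negative that shares an id with the positive,
        # so it doesn't contribute to the loss as a false negative.
        self.task = tfrs.tasks.Retrieval(remove_accidental_hits=True)

    def compute_loss(self, features, training=False):
        user_embeddings = self.user_model(features["user_id"])
        item_embeddings = self.item_model(features["item_id"])
        return self.task(
            user_embeddings,
            item_embeddings,
            # Only the positive ids for this batch; used to detect accidental hits.
            candidate_ids=features["item_id"],
        )
```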
Just the positives for that batch @rlcauvin. In practice, sampling a negative that is a positive in another batch doesn't affect performance and provides some mild regularisation. You have `candidate_ids`...
Hey @rlcauvin, I would start with

> just using user IDs and item IDs

The model should learn based on just this. When you experimented with the ranking model, is...
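If it helps, here is a minimal sketch of an ID-only tower (the toy vocabularies and embedding size are assumptions, not from your data):

```python
import tensorflow as tf

# Toy vocabularies; in practice these come from your interaction data.
unique_user_ids = ["u1", "u2", "u3"]
unique_item_ids = ["i1", "i2", "i3", "i4"]

def id_tower(vocabulary, embedding_dim=32):
    # Map raw id strings to integer indices, then to a dense embedding.
    return tf.keras.Sequential([
        tf.keras.layers.StringLookup(vocabulary=vocabulary, mask_token=None),
        tf.keras.layers.Embedding(len(vocabulary) + 1, embedding_dim),
    ])

user_model = id_tower(unique_user_ids)
item_model = id_tower(unique_item_ids)
```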
@rlcauvin, I'm not quite sure how to help in this case. It sounds quite odd. A few thoughts come to mind:

1. How are items presented to users in your...
Hey @joaoalveshenriques, That error is because you are trying to index a tensor with a string. Somewhere in the code you think you have a dictionary, but you don't. My...
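To illustrate what I mean (toy data, not your code):

```python
import tensorflow as tf

features = {"user_id": tf.constant(["u1", "u2"])}
features["user_id"]   # fine: `features` is a dict, so string keys work

batch = tf.constant([["u1"], ["u2"]])
# batch["user_id"]    # raises TypeError: a tensor can't be indexed with a string
```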
Hey @amit-timalsina, the picture above describes the serving case for a two-tower retrieval model.
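Roughly, the serving path looks like the sketch below, assuming a trained `model` with `user_model` / `item_model` towers and a `tf.data.Dataset` of item ids called `items` (all of those names are assumptions):

```python
import tensorflow as tf
import tensorflow_recommenders as tfrs

# Build a top-k index over the candidate embeddings, keyed by item id.
index = tfrs.layers.factorized_top_k.BruteForce(model.user_model)
index.index_from_dataset(
    items.batch(128).map(lambda item_id: (item_id, model.item_model(item_id)))
)

# At serving time: embed the query user and return the top-k item ids.
scores, item_ids = index(tf.constant(["user_42"]))
```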
Hey @ydennisy, As far as I know, there is no way to handle explicit negative samples in the retrieval stage. My approach would be to train the retrieval model on...
@hkristof03, The easiest way to compute metrics only during validation is to pass `compute_metrics=not training` in your call to the retrieval task. When the metrics are...
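Something along these lines, assuming a `tfrs.Model` subclass (the tower and feature names are placeholders):

```python
def compute_loss(self, features, training=False):
    query_embeddings = self.query_model(features["user_id"])
    candidate_embeddings = self.candidate_model(features["item_id"])
    # Skip the expensive factorized top-k metrics on training steps;
    # Keras passes training=False during the validation pass, so the
    # metrics are still computed there.
    return self.task(
        query_embeddings,
        candidate_embeddings,
        compute_metrics=not training,
    )
```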
Hi @hkristof03, I'm certainly not an expert in this area and you may have already implemented it in this way, but I'll share my thoughts. It might help if you...
You need to track the state of what users have interacted with separately from the model. If for each user you maintain a list of `item_ids` that the user has...
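As a sketch of what I mean (the `seen_items` store and the `index` top-k layer are assumptions; you'd persist the seen-item state in whatever store fits your serving setup):

```python
import tensorflow as tf

# Kept outside the model: user_id -> set of item ids the user has already interacted with.
seen_items = {"user_42": {"item_1", "item_7"}}

def recommend(index, user_id, k=10, extra=50):
    # Over-fetch candidates, then drop items the user has already seen.
    _, item_ids = index(tf.constant([user_id]), k=k + extra)
    item_ids = [i.decode() for i in item_ids[0].numpy()]
    unseen = [i for i in item_ids if i not in seen_items.get(user_id, set())]
    return unseen[:k]
```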