Roger L. Cauvin
The advice to provide the candidate IDs is very interesting, @entylop. I haven't seen examples that use it. In my retrieval model, which has context features for the query and...
> @rlcauvin to use candidate ids you just change your line:
>
> ```python
> loss = self.task(query_output, candidate_output, candidate_ids=candidate_ids)
> ```
>
> Also no metrics do not affect...
> @rlcauvin, you pass the positive `candidate_ids`. If you construct the `Retrieval` task with `remove_accidental_hits=True`, then the loss calculation will ensure that any sampled negatives that are the same item...
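To make the `remove_accidental_hits` mechanism concrete, here is a minimal numpy sketch of what that masking does conceptually (the function name and mask value are illustrative, not the TFRS implementation): with in-batch negatives, row *j*'s positive candidate serves as a negative for query *i*, so when two rows share the same item id, the duplicate must be masked out of the loss.

```python
import numpy as np

def mask_accidental_hits(logits, candidate_ids,
                         mask_value=np.finfo(np.float32).min):
    """Mask in-batch negatives that are accidentally the same item as a positive.

    logits: [batch, batch] array; logits[i, j] scores query i against the
      positive candidate of row j (row j's positive is an in-batch negative
      for every other row i).
    candidate_ids: [batch] array of each row's positive candidate id.
    """
    ids = np.asarray(candidate_ids)
    # duplicate[i, j] is True when row j's candidate is the same item as
    # row i's positive, but column j is not row i's own positive column.
    duplicate = (ids[None, :] == ids[:, None]) & ~np.eye(len(ids), dtype=bool)
    out = np.array(logits, dtype=np.float32)
    out[duplicate] = mask_value  # effectively removes them from the softmax
    return out

logits = np.zeros((3, 3), dtype=np.float32)
# Rows 0 and 2 share item 10, so each is an "accidental hit" for the other.
masked = mask_accidental_hits(logits, candidate_ids=[10, 20, 10])
```

After masking, positions `(0, 2)` and `(2, 0)` hold the large negative value, while the diagonal (each row's true positive) is untouched.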
Thanks again, @patrickorlando. Your explanations and examples help a lot. My larger challenge is that I have a retrieval model (of user clicks on items) that just doesn't learn, seemingly...
A belated thank-you to @patrickorlando and @josealbertof. I set this conundrum aside for a few weeks but got back to it a few days ago. Here are my conclusions. First,...
@deychak Thanks for pointing us to `candidate_sampling_probability`. What is an example of how you populate the tensor for that parameter?
Thanks, @deychak. I did see in the research paper that we can use the candidate frequencies from the training data. My question was more around the structure of the tensor...
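As a rough sketch of the tensor structure (the variable names and the frequency-based estimate are assumptions, not confirmed TFRS internals): `candidate_sampling_probability` is a `[batch_size]` tensor aligned row-for-row with the candidate embeddings, where each entry is the probability of that row's positive candidate being sampled, estimated here by its frequency in the training data.

```python
import numpy as np

# Hypothetical training interactions: each element is a clicked item id.
train_item_ids = np.array([3, 1, 3, 2, 3, 1, 2, 3])

# Estimate each item's sampling probability by its empirical frequency.
ids, counts = np.unique(train_item_ids, return_counts=True)
freq = dict(zip(ids.tolist(), (counts / counts.sum()).tolist()))

# For a training batch, the tensor passed as `candidate_sampling_probability`
# has shape [batch_size]: one probability per row, aligned with that row's
# positive candidate (the same alignment as the candidate embeddings).
batch_item_ids = np.array([3, 1, 2])
candidate_sampling_probability = np.array(
    [freq[i] for i in batch_item_ids], dtype=np.float32)
```

The retrieval loss can then apply the logQ correction, subtracting `log(candidate_sampling_probability)` from each row's logits so popular items are not over-penalized as in-batch negatives.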
@maciejkula Would you mind explaining why `tf.saved_model.save` behaves differently from `keras.models.save_model`? Does the error that @RAHEYO got reveal a bug in `keras.models.save_model`?
@patrickorlando You wrote that the general approach is to train the retrieval model on only positive interactions, and the ranking model on both positive and negative feedback. You also mentioned the...
Thanks, @patrickorlando. My current recommender project is binary classification where the user is presented with a single item and decides whether to click. I've been unable to get the retrieval...