Patrick Orlando
It's not very common to require the probabilities, but if you do, you need to compute the scores for all candidates, which will only work with brute-force retrieval....
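As a hedged sketch of why this needs brute force (the function name is illustrative, not a TFRS API): probabilities require a softmax over the dot-product scores of *every* candidate, whereas an approximate index only returns top-k scores, so the normalizing sum is unavailable.

```python
import numpy as np

def candidate_probabilities(query_emb, candidate_embs):
    """Softmax over dot-product scores against all candidates.

    Only feasible with brute-force retrieval, where every candidate
    embedding is available; an ANN index returns top-k scores only,
    so the full softmax denominator cannot be computed from it.
    """
    scores = candidate_embs @ query_emb   # (num_candidates,)
    scores = scores - scores.max()        # numerical stability
    exp = np.exp(scores)
    return exp / exp.sum()

# Toy example: 4 candidates with 3-dim embeddings.
query = np.array([1.0, 0.0, 0.0])
cands = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.5, 0.5, 0.0],
                  [1.0, 1.0, 0.0]])
probs = candidate_probabilities(query, cands)
```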
There have been some API changes since this issue was first posted. Indexing a Retrieval layer is now achieved with the `index_from_dataset` method. This method expects a dataset that contains...
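To make the expected input concrete, `index_from_dataset` consumes a dataset yielding either candidate embeddings or `(identifier, embedding)` pairs. The pure-Python stand-in below (the `movie_id` values and shapes are illustrative assumptions, not the TFRS API) shows that contract:

```python
import numpy as np

# Hypothetical stand-in for a dataset of (identifier, embedding)
# pairs -- the structure you index when you want the retrieval layer
# to return candidate identifiers rather than row positions.
def candidate_dataset():
    rng = np.random.default_rng(0)
    for movie_id in ["m1", "m2", "m3"]:
        yield movie_id, rng.standard_normal(4)

ids, embs = zip(*candidate_dataset())
embeddings = np.stack(embs)        # (num_candidates, dim)
print(len(ids), embeddings.shape)  # 3 (3, 4)
```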
Hey all! There isn't really a straightforward answer here, but hopefully this helps. Both approaches are valid but might produce different results. The decision of which to use is based...
@naarkhoo if you have a bounded number of candidates that is different for each user then you don't really want a retrieval index. You would just pass your candidates to...
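As a minimal sketch of scoring without an index (names are illustrative): when each user has their own small candidate set, you can score just those candidates directly with a dot product and sort, rather than building a retrieval index over the full corpus.

```python
import numpy as np

def score_user_candidates(query_emb, candidate_embs):
    """Score only this user's own candidates with a dot product.

    No retrieval index is needed when the candidate set is small
    and user-specific -- there is nothing to search over.
    """
    return candidate_embs @ query_emb

query = np.array([0.2, 0.8])
user_candidates = np.array([[1.0, 0.0],
                            [0.0, 1.0],
                            [0.5, 0.5]])
scores = score_user_candidates(query, user_candidates)
ranked = np.argsort(-scores)  # best candidate first
```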
The general approach is to train the retrieval model on only positive interactions, and to train a ranking model that predicts satisfaction based on positive and negative feedback....
Hey @rlcauvin, I think you are right in the sense that a multitask model won't be suitable and you'd need to train two separate models. The crux of the problem...
Welcome @LaxmanSinghTomar! Your general approach is great. Recommendation use cases are all unique, so it's hard to prescribe or validate an approach without trying it. A lot of...
Hi @hkristof03 :wave: In the end, your query and candidate embeddings need to be the same size. People often achieve this with dense layer(s) after your feature embeddings. However, if...
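A minimal sketch of the "dense layer(s) after your feature embeddings" idea, with random matrices standing in for learned weights and all dimensions chosen for illustration: each tower's concatenated features can be a different width, as long as a final projection maps both into the same shared space.

```python
import numpy as np

rng = np.random.default_rng(42)

# Concatenated feature embeddings of different widths per tower.
query_features = rng.standard_normal(48)      # e.g. user id (32) + age bucket (16)
candidate_features = rng.standard_normal(80)  # e.g. item id (64) + category (16)

# Dense projections (random stand-ins for learned parameters) map
# each tower's features into a shared 32-dim embedding space.
W_q = rng.standard_normal((48, 32))
W_c = rng.standard_normal((80, 32))
query_emb = query_features @ W_q
candidate_emb = candidate_features @ W_c

# Now both embeddings are the same size, so a dot product is defined.
score = query_emb @ candidate_emb
```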
Some things that come to mind:
- Ensure your batch size is reasonably large (1024+).
- Confirm that the output shapes from your query and candidate towers are `(batch, query_dim)`....
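To show why the `(batch, query_dim)` shape check matters (dimensions here are illustrative): the in-batch softmax loss is built from a `(batch, batch)` score matrix, which only exists if both towers emit rank-2 outputs.

```python
import numpy as np

batch, dim = 1024, 32
query_out = np.zeros((batch, dim))      # query tower output
cand_out = np.zeros((batch, dim))       # candidate tower output

# Both towers must emit rank-2 (batch, dim) tensors; otherwise the
# (batch, batch) in-batch logits matrix below cannot be formed.
assert query_out.shape == (batch, dim)
assert cand_out.shape == (batch, dim)
logits = query_out @ cand_out.T
```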
This concept is discussed in https://github.com/tensorflow/recommenders/issues/388#issuecomment-941254103 and the comments following. To make this work you must re-construct the index before each call to `model.evaluate()` to update the candidate embeddings.
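A minimal sketch of why the rebuild is needed (`BruteForceIndex` here is a hypothetical stand-in, not the TFRS class): the index holds a frozen snapshot of candidate embeddings, so after training updates them, the old index still scores stale vectors until it is re-built.

```python
import numpy as np

class BruteForceIndex:
    """Hypothetical stand-in for a retrieval index that snapshots
    candidate embeddings at construction time."""
    def __init__(self, candidate_embs):
        self.candidates = candidate_embs.copy()  # frozen snapshot

    def top_k(self, query, k=2):
        scores = self.candidates @ query
        return np.argsort(-scores)[:k]

candidates = np.array([[1.0, 0.0], [0.0, 1.0]])
index = BruteForceIndex(candidates)

# Training updates the embeddings in place...
candidates[0] = [0.0, 2.0]

# ...but the old index still ranks the stale snapshot, which is why
# the index must be re-built before each evaluation.
stale = index.top_k(np.array([0.0, 1.0]))
fresh = BruteForceIndex(candidates).top_k(np.array([0.0, 1.0]))
```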