Patrick Orlando

85 comments by Patrick Orlando

Yes to both questions. Evaluating recommender systems is hard. Online A/B tests are useful. A model that performs well on an offline dataset doesn't guarantee that you have a great recommender...

Hi @jillwalker99, Sure, dropout can be added to your query and item towers, but you will probably need to tune the dropout rate. You also might want to add L2 normalization...
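A minimal sketch of what a query tower with dropout and L2 normalization might look like (the vocabulary, layer sizes and dropout rate are placeholders, not values from your model):

```python
import tensorflow as tf

user_ids = ["u1", "u2", "u3"]  # placeholder vocabulary

query_tower = tf.keras.Sequential([
    tf.keras.layers.StringLookup(vocabulary=user_ids, mask_token=None),
    tf.keras.layers.Embedding(len(user_ids) + 1, 64),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.2),  # the rate is a parameter you'll need to tune
    tf.keras.layers.Dense(32),
    # Optional: constrain outputs to the unit sphere (see the next comment).
    tf.keras.layers.Lambda(lambda x: tf.math.l2_normalize(x, axis=1)),
])
```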

L2 normalization divides a vector by its Euclidean norm. It means that the outputs of your query and candidate towers will be constrained to the unit sphere. As the paper...
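A quick numerical illustration (my own, not from the paper):

```python
import tensorflow as tf

v = tf.constant([[3.0, 4.0]])
v_unit = tf.math.l2_normalize(v, axis=1)  # divides by ||v|| = 5
print(v_unit.numpy())                     # [[0.6 0.8]]
print(tf.norm(v_unit, axis=1).numpy())    # [1.], i.e. on the unit sphere
```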

@jillwalker99, perhaps; there is no _one-size-fits-all_ approach.

Hey @mustfkeskin, a very uninformed guess would be that query and candidate embeddings require different scales (vector norms), particularly if you are using the `candidate_sampling_probability` parameter for sampling bias correction. In...
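If you want to check that intuition, you could compare the typical norms each tower produces. A rough sketch (the towers and batches here are stand-ins for your own):

```python
import tensorflow as tf

# Stand-ins for your trained towers and input batches.
query_model = tf.keras.layers.Dense(32)
candidate_model = tf.keras.layers.Dense(32)
query_batch = tf.random.normal([128, 16])
candidate_batch = tf.random.normal([128, 16])

# A large gap between these means suggests the towers produce
# embeddings at different scales.
print(tf.reduce_mean(tf.norm(query_model(query_batch), axis=1)).numpy())
print(tf.reduce_mean(tf.norm(candidate_model(candidate_batch), axis=1)).numpy())
```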

Use the `candidate_sampling_probability` in the same way, calculated over the target distribution; don't worry about the query also being an item.

```python
class Item2ItemModel(tfrs.Model):
    def __init__(self, sku_model, ...
```
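The snippet above is cut off, so here is a minimal sketch of how such a model might be laid out (feature names like `query_sku` and `target_sku_probability` are placeholders, not the original code):

```python
import tensorflow_recommenders as tfrs

class Item2ItemModel(tfrs.Model):
    """Item-to-item retrieval: both the query and the candidate are SKUs,
    embedded here by a single shared tower."""

    def __init__(self, sku_model, task):
        super().__init__()
        self.sku_model = sku_model
        self.task = task  # a tfrs.tasks.Retrieval instance

    def compute_loss(self, features, training=False):
        query_embeddings = self.sku_model(features["query_sku"])
        candidate_embeddings = self.sku_model(features["target_sku"])
        return self.task(
            query_embeddings,
            candidate_embeddings,
            # Probabilities calculated over the target distribution.
            candidate_sampling_probability=features["target_sku_probability"],
        )
```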

@mustfkeskin

> How do I boost popular products? `candidate_sampling_probability` does the opposite as I understand it.

You are mistaken here; it does _boost_ the scores for popular items. See https://github.com/tensorflow/recommenders/issues/440...
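Roughly, the correction works like this (a simplified restatement, not the library source):

```python
import tensorflow as tf

def corrected_logits(logits, candidate_sampling_probability):
    # During training, each in-batch candidate's logit is corrected by
    # subtracting log(q), where q is its sampling probability.
    return logits - tf.math.log(candidate_sampling_probability)

# Popular items have a large q, so their *training* logits are pushed
# down. To compensate, the model learns higher raw scores for them,
# which is why the correction boosts popular items at serving time.
```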

Top-K Categorical Accuracy, along with MRR or NDCG, are all valid metrics, but understanding the scores for a _popularity_ baseline is important. Additionally, you might want to analyse these...
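For context, a popularity baseline just recommends the same top-k most frequent items to everyone. A toy version (data made up):

```python
import numpy as np

train_items = np.array([0, 0, 0, 1, 1, 2, 3, 3, 3, 3])  # target item per train example
test_items = np.array([3, 1, 2, 4])                      # target item per test example

k = 2
top_k = np.argsort(-np.bincount(train_items))[:k]  # most frequent items: [3, 0]
hit_rate = np.isin(test_items, top_k).mean()       # top-k "accuracy" of the baseline
print(top_k, hit_rate)                             # [3 0] 0.25
```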

Hi @OmarMAmin, Not knowing the frequency distribution of your candidates, along with the fact that you have delivery constraints, makes it hard for me to have any intuition on this....

The effect of folding presents itself when only positive examples are used to calculate the loss, for example WALS in matrix factorisation. The article suggests strategies like negative sampling to prevent...
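A toy illustration of the negative sampling idea (item ids and counts made up):

```python
import numpy as np

rng = np.random.default_rng(0)
num_items = 1000

def sample_negatives(positive_items, num_negatives=5):
    # Draw random item ids for each positive pair so the loss also pushes
    # unrelated embeddings apart, counteracting folding.
    return rng.integers(0, num_items, size=(len(positive_items), num_negatives))

print(sample_negatives(np.array([10, 42, 7])))
```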