Roger L. Cauvin

39 comments by Roger L. Cauvin

@MNMaqsood Thanks for the advice regarding context features and separate retrieval and ranking models. I've done both of those things, and the retrieval model just doesn't learn when evaluated against the...

Which metrics is it outputting? RMSE?

What do you mean by "enhance the output"? Do you want to compare the metrics (e.g., top-K categorical accuracies) for the combined output of retrieval and ranking to that...
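
For the retrieval side of that comparison, the top-K categorical accuracies come straight out of `model.evaluate` when the retrieval task is configured with `tfrs.metrics.FactorizedTopK`, as in the basic retrieval tutorial. A minimal sketch, assuming `model` and `cached_test` are the tutorial's objects:

```python
# Top-K categorical accuracies for the retrieval model alone, assuming
# `model` and `cached_test` come from the basic retrieval tutorial.
metrics = model.evaluate(cached_test, return_dict=True)

for k in (1, 5, 10, 50, 100):
    key = f"factorized_top_k/top_{k}_categorical_accuracy"
    print(f"top-{k} categorical accuracy: {metrics[key]:.4f}")
```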

We generally expect ranking models to be more predictive. However, it might depend on the metric. Using ROC-AUC as the evaluation metric, you will very likely see better predictions from...
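
A minimal sketch of scoring a ranking model with ROC-AUC on a held-out set; `ranking_model`, `test_ds`, and the `label` feature are assumptions for illustration, not names from the tutorials:

```python
import tensorflow as tf

# ROC-AUC of a ranking model's scores against a binary engagement label.
# `ranking_model` maps a dict of features to one raw score per example;
# `test_ds` yields batched feature dicts that include "label" (both assumed).
auc = tf.keras.metrics.AUC(curve="ROC")

for batch in test_ds:
    scores = ranking_model(batch)                      # shape (batch_size, 1)
    auc.update_state(batch["label"], tf.sigmoid(scores[:, 0]))

print("ROC-AUC:", float(auc.result()))
```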

The tutorials split the input data into training and test samples. The test sample includes the "actual" or "ground truth" values. After building the retrieval and ranking models, generate predictions...
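
The split and the test-time predictions look roughly like this, mirroring the basic retrieval tutorial (`ratings`, `movies`, and `model` are the tutorial's objects):

```python
import tensorflow as tf
import tensorflow_recommenders as tfrs

# Train/test split as in the basic retrieval tutorial: the held-out 20,000
# ratings are the "ground truth" interactions.
tf.random.set_seed(42)
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)
train = shuffled.take(80_000)
test = shuffled.skip(80_000).take(20_000)

# After training, index the candidates and generate predictions
# (top-K movie titles) for a test user.
index = tfrs.layers.factorized_top_k.BruteForce(model.user_model)
index.index_from_dataset(
    tf.data.Dataset.zip((movies.batch(100), movies.batch(100).map(model.movie_model)))
)
_, titles = index(tf.constant(["42"]), k=10)
```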

If you do what I described above (compute evaluation metrics for combined retrieval and ranking), as well as the metrics for retrieval alone, you can compare them on the test...
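
A rough sketch of that comparison, reusing the `index` from the split sketch above plus a hypothetical `ranking_model` that scores `(user_id, movie_title)` pairs; a simple hit-rate loop stands in for the built-in metric so the same number can be computed for both pipelines:

```python
import tensorflow as tf

# Compare top-K accuracy of retrieval alone vs. retrieval followed by
# ranking on the held-out test set. `index` is the BruteForce layer above;
# `ranking_model` is a hypothetical model scoring (user_id, movie_title).
K, N = 10, 100
hits_retrieval = hits_combined = total = 0

for row in test.batch(1):
    truth = row["movie_title"][0]

    # Retrieval: top-N candidate titles for this user, best first.
    _, titles = index(row["user_id"], k=N)
    titles = titles[0]
    hits_retrieval += int(tf.reduce_any(titles[:K] == truth).numpy())

    # Ranking: score the retrieved candidates and re-order them.
    scores = ranking_model({
        "user_id": tf.repeat(row["user_id"], N),
        "movie_title": titles,
    })
    reranked = tf.gather(titles, tf.argsort(scores[:, 0], direction="DESCENDING"))
    hits_combined += int(tf.reduce_any(reranked[:K] == truth).numpy())
    total += 1

print(f"retrieval-only top-{K} accuracy:    {hits_retrieval / total:.4f}")
print(f"retrieval+ranking top-{K} accuracy: {hits_combined / total:.4f}")
```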

I'm gathering that you want sort of a real-world before-and-after comparison? The positive engagement rate was X when using retrieval for recommendations, and it changed to Y after...

If you want to exclude watched movies from the recommendations, use [query_with_exclusions](https://www.tensorflow.org/recommenders/api_docs/python/tfrs/layers/factorized_top_k/BruteForce#query_with_exclusions) for retrieval, then do the ranking.
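
A sketch of that two-step flow, assuming the fitted `index` and the hypothetical `ranking_model` from the earlier sketches; `watched` holds the titles to exclude for each query user:

```python
import tensorflow as tf

# Exclude already-watched titles at retrieval time, then re-rank the rest.
# `index` is the fitted BruteForce layer; `ranking_model` and `watched`
# are assumptions for illustration.
user_ids = tf.constant(["42"])
watched = tf.constant([["Speed (1994)", "Toy Story (1995)"]])  # shape (1, n_watched)

_, candidates = index.query_with_exclusions(user_ids, exclusions=watched, k=100)

scores = ranking_model({
    "user_id": tf.repeat(user_ids, tf.shape(candidates)[1]),
    "movie_title": candidates[0],
})
recommendations = tf.gather(candidates[0],
                            tf.argsort(scores[:, 0], direction="DESCENDING"))
```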

The [basic ranking tutorial](https://www.tensorflow.org/recommenders/examples/basic_ranking) covers the ranking stage, but it assumes users rate items on a scale of 0.5 to 5.0 stars. However, you may adapt it to...
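
For example, with a binary engaged/not-engaged label instead of star ratings, the tutorial's ranking task could be configured along these lines (the label name and the sigmoid output are assumptions):

```python
import tensorflow as tf
import tensorflow_recommenders as tfrs

# Ranking task adapted to a binary engagement label: swap the tutorial's
# mean-squared-error loss and RMSE metric for binary cross-entropy and
# ROC-AUC. The ranking network is assumed to end in a sigmoid so that its
# predictions are probabilities in [0, 1].
task = tfrs.tasks.Ranking(
    loss=tf.keras.losses.BinaryCrossentropy(),
    metrics=[tf.keras.metrics.AUC(curve="ROC", name="roc_auc")],
)

# Inside compute_loss, the task would then receive the binary label:
#   return self.task(labels=features["engaged"], predictions=scores)
```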

At long last I'm following up on the topic of getting a retrieval model with query and candidate features to learn. The keys were using `tf.keras.losses.CategoricalCrossentropy(from_logits=True)` for the...
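
For reference, a minimal sketch of a retrieval task configured with that loss, assuming the loss in question is the retrieval task's loss and that `movies` and `movie_model` are the tutorial's objects:

```python
import tensorflow as tf
import tensorflow_recommenders as tfrs

# Retrieval task with an explicit categorical cross-entropy loss over the
# in-batch logits, plus the usual factorized top-K metrics.
task = tfrs.tasks.Retrieval(
    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
    metrics=tfrs.metrics.FactorizedTopK(
        candidates=movies.batch(128).map(movie_model),
    ),
)
```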