Calling Predictions from Models
For all of the Colab examples (except basic_retrieval), there isn't a clear way to get predictions from the models. For basic_retrieval, the use of the FactorizedTopK metric, tfrs.tasks.Retrieval(metrics=tfrs.metrics.FactorizedTopK(songs.batch(64).map(song_model))), and of a brute-force index, tfrs.layers.factorized_top_k.BruteForce(model.user_model), is presented and works. However, for basic_ranking, context_features, deep_recommenders, multitask, and dcn, the code examples end at evaluation on the test/dev sets.
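For reference, this is roughly the prediction pattern from the basic_retrieval Colab; treat it as a sketch, since model, songs, and song_model are assumed to come from that notebook and the user id "42" is just a placeholder:
import tensorflow as tf
import tensorflow_recommenders as tfrs

# Build a brute-force index over the candidate embeddings produced by the
# candidate model (`songs` / `song_model` are assumed from the notebook).
index = tfrs.layers.factorized_top_k.BruteForce(model.user_model)
index.index_from_dataset(
    tf.data.Dataset.zip((songs.batch(100), songs.batch(100).map(song_model))))
# Querying the index returns (scores, candidate identifiers) per user query.
scores, titles = index(tf.constant(["42"]))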
So, the question is: how does one make predictions? How does one run inference with the model?
One option I have tried is calling model.predict() or model(), but both raise errors. The errors below come from code I ran after working through the entire basic_ranking Colab notebook.
for i in train.take(1):
    print(i)
    model.predict(i)
{'movie_title': <tf.Tensor: shape=(), dtype=string, numpy=b'Postman, The (1997)'>, 'user_id': <tf.Tensor: shape=(), dtype=string, numpy=b'681'>, 'user_rating': <tf.Tensor: shape=(), dtype=float32, numpy=4.0>}
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-68-be1677278cba> in <module>()
1 for i in train.take(1):
2 print(i)
----> 3 model.predict(i)
4 model(i)
4 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/tensor_shape.py in __getitem__(self, key)
887 else:
888 if self._v2_behavior:
--> 889 return self._dims[key].value
890 else:
891 return self._dims[key]
IndexError: list index out of range
And,
for i in train.take(1):
    print(i)
    model(i)
{'movie_title': <tf.Tensor: shape=(), dtype=string, numpy=b'Postman, The (1997)'>, 'user_id': <tf.Tensor: shape=(), dtype=string, numpy=b'681'>, 'user_rating': <tf.Tensor: shape=(), dtype=float32, numpy=4.0>}
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
<ipython-input-69-fb4f109a1076> in <module>()
1 for i in train.take(1):
2 print(i)
----> 3 model(i)
1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py in call(self, inputs, training, mask)
444 a list of tensors if there are more than one outputs.
445 """
--> 446 raise NotImplementedError('When subclassing the `Model` class, you should '
447 'implement a `call` method.')
448
NotImplementedError: When subclassing the `Model` class, you should implement a `call` method.
Thanks for your help!
I recognize that there are related issues #248 and #215, but I haven't been able to implement either solution. Is there a way to do predictions without adding additional code to the *Model classes?
The actual part of the model that's responsible for generating predictions is model.ranking_model. Have you tried calling model.ranking_model(batch)?
@maciejkula Could you clarify what format/structure batch should be in the call model.ranking_model(batch)? I'm having the same challenge as @phillipshaon.
@pemoriarty @phillipshaong I have the same issue. I want to predict the ranking in the ranking part. Have you found a solution for that?
I did actually get it working, but unfortunately I don't have access to the code anymore. @felixDulys, maybe you could help answer this now?
I think what you suggest (calling ranking_model) would work, and it is similar to what I did (by random guessing). I would like to know what exactly the model calls after I invoke model.predict (sorry, I am totally new to TF/Keras); it seems to call the call function?
I did make it work by adding this method to MovielensModel, though I'm not sure it is a good solution:
def call(self, features: Dict[Text, tf.Tensor]):
    rating_predictions = self.ranking_model(
        (features["user_id"], features["movie_title"]))
    return rating_predictions
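With that call() in place, the standard Keras entry points seem to work; something along these lines is one way to check (assuming cached_test is the batched test dataset from the notebook, so this is a sketch rather than a guaranteed recipe):
# With call() defined as above, Keras can route predict() through it.
# `cached_test` is assumed to be the batched/cached test split from the notebook.
predictions = model.predict(cached_test)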
Check out this section: https://www.tensorflow.org/recommenders/examples/basic_ranking#testing_the_ranking_model
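That section calls the ranking part of the model directly on raw ids; roughly like the following (exact names depend on the notebook, so treat this as a sketch):
# Score a single (user, movie) pair by calling the ranking sub-model directly;
# the tuple format matches RankingModel.call in the basic_ranking tutorial.
model.ranking_model((
    tf.constant(["42"]),
    tf.constant(["One Flew Over the Cuckoo's Nest (1975)"])))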
I second @xiaoyaoyang -- it looks like @pemoriarty (hi! hope you are well :-) ) used a call() method inside of the ranking model class (which is a subclass of tf.keras.Model) like:
def call(self, inputs):
    user_id, movie_id = inputs
    user_embedding = self.user_embeddings(user_id)
    movie_embedding = self.movie_embeddings(movie_id)
    return self.ratings(tf.concat([user_embedding, movie_embedding], axis=1))
Note that the initialization function in that class holds the architecture of the model: self.movie_embeddings and self.user_embeddings are defined there and are keras.Sequential objects. Also note that the inputs are a tuple.
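For context, here is a rough sketch of what that initialization might look like; the layer names, dimensions, and the unique_user_ids / unique_movie_titles vocabularies are assumptions based on the basic_ranking tutorial, not @pemoriarty's exact code:
class RankingModel(tf.keras.Model):

    def __init__(self):
        super().__init__()
        embedding_dim = 32
        # Map raw string ids to embeddings (older TF versions use
        # tf.keras.layers.experimental.preprocessing.StringLookup instead).
        self.user_embeddings = tf.keras.Sequential([
            tf.keras.layers.StringLookup(vocabulary=unique_user_ids),
            tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dim)])
        self.movie_embeddings = tf.keras.Sequential([
            tf.keras.layers.StringLookup(vocabulary=unique_movie_titles),
            tf.keras.layers.Embedding(len(unique_movie_titles) + 1, embedding_dim)])
        # Dense layers that turn the concatenated embeddings into a rating.
        self.ratings = tf.keras.Sequential([
            tf.keras.layers.Dense(256, activation="relu"),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(1)])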
Then, when you have trained your model and you want to make predictions, it looks like she did some specific transformations / typing:
import itertools
import pandas as pd
import numpy as np

prediction_cols = ["guest_id", "movie_id"]
prediction_df = pd.DataFrame(
    list(itertools.product(guests, movies)), columns=prediction_cols)
prediction_dict = {
    "guest_id": np.array(tuple(prediction_df["guest_id"])),
    "movie_id": np.array(tuple(prediction_df["movie_id"]))
}
prediction_scores = model.call(
    (prediction_dict["guest_id"], prediction_dict["movie_id"]))
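One possible follow-up (my own addition, not part of her snippet) is to join the scores back onto the candidate pairs and rank them per guest:
# prediction_scores has shape (num_pairs, 1); flatten and attach to the pairs.
prediction_df["score"] = prediction_scores.numpy().flatten()
# Keep each guest's 10 highest-scoring movies.
top_movies = (prediction_df.sort_values("score", ascending=False)
              .groupby("guest_id").head(10))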
Hopefully this will work for you! Apologies for the delay. Good luck!