Social network use case: Handle user to user interactions (items = users, reciprocal)
Hi there,
first: Many thanks for the awesome library!
I would like to ask if the following use case of "social recommendations" is possible with LightFM (in a more or less efficient way).
Scenario
Suppose a user `u0`, who is befriended with `u1`, wants to get new user recommendations. The goal is not to recommend new items in the classical sense but to recommend new users.
Since there are other types of interactions with varying importance, the interaction matrix is weighted (say friendship equals a weight of 1):

```
(0, 1)  1
(1, 0)  1
```
Because friendship is bidirectional, both interactions appear in the matrix; unidirectional (one-sided) implicit interactions such as "view" should also be possible. If there are multiple interactions between two users, the interaction weights are summed up and smoothed, e.g. by applying a log function.
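To make the weighting scheme concrete, here is a minimal sketch of how such a symmetric, log-smoothed weight matrix could be built with SciPy. The user ids, the "view" weight of 0.5, and the `log1p` smoothing are all assumptions for illustration:

```python
import numpy as np
import scipy.sparse as sp

# Hypothetical interactions as (source, target, raw_weight).
# Friendship (weight 1) is entered in both directions; a one-sided
# "view" (assumed weight 0.5) appears only once per event.
raw = [
    (0, 1, 1.0),  # u0 <-> u1 friendship
    (1, 0, 1.0),
    (0, 2, 0.5),  # u0 viewed u2 (unidirectional)
    (0, 2, 0.5),  # repeated view: weights accumulate before smoothing
]

n_users = 3
rows, cols, vals = zip(*raw)
# Duplicate COO entries are summed during conversion to CSR.
weights = sp.coo_matrix((vals, (rows, cols)), shape=(n_users, n_users)).tocsr()
# Smooth the accumulated weights: w -> log(1 + w).
weights.data = np.log1p(weights.data)

print(weights.toarray())
```

The two views of `u2` sum to 1.0 before smoothing, so they end up with the same smoothed weight as a single friendship edge.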
Consequently, items are not a different entity and items effectively become users.
Now, features such as gender or age are also present, to combat the cold-start problem and fully utilise the efficient hybrid recommendation capabilities of LightFM.
LightFM is trained as follows with 2 effective inputs (`interaction_matrix` and `user_matrix`, both built using the `Dataset` class):

```python
lightfm.fit(interactions=interaction_matrix, sample_weight=interaction_matrix,
            item_features=user_matrix, user_features=user_matrix)
```
To save memory, the `interaction_matrix` is reused for the weights, too (as stated in the docs). Since the item features (i.e. the features of what is to be recommended) do not differ from the user features, the same matrix is passed for both.
Questions
- Is the above scenario possible? If yes, is there a way to make it more efficient?
- As far as I understand, the embeddings and biases for users and items are learned separately using multi-threaded SGD. Should the identity matrix added by the `Dataset` class be used for the `item_features` as well as the `user_features` parameter, or does it actually make the model less expressive, preventing transfer learning from occurring?
I searched a lot but could not find a similar use case in other issues, since most LightFM users want to recommend real items != users (movies, products, questions, etc.).
Any help or suggestions appreciated! If LightFM is not suitable for the mentioned scenario, that would be good to know, too.
[Edited title: I've read that such specializations of recommender systems are also called "reciprocal".]