recommenders
Interpretation of scores, and how to find their confidence level?
Hello there,
1- I am wondering whether the user-item scores (i.e. query-candidate scores) output by model prediction are interaction probabilities or affinity scores.
2- Is it possible to compare the scores across users? For example, assume the (userID, itemID, score) triplet for one user is (u_i, I_j, s_ij) and for another user is (u_ii, I_jj, s_iijj), such that u_i <> u_ii, I_j <> I_jj, and s_ij > s_iijj. Can we conclude that u_i likes I_j more than u_ii likes I_jj?
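To make question 2 concrete, here is a minimal sketch with made-up scores. It contrasts a raw cross-user comparison with a per-user min-max normalization, which is one possible way scores *could* be made comparable if they are not already (I'm not assuming the model actually works this way):

```python
import numpy as np

# Made-up raw scores: rows are users, columns are the same candidate items.
raw = np.array([
    [0.2, 1.5, 3.0],   # u_1's scores over items i_1..i_3
    [5.0, 6.1, 9.4],   # u_2's scores over items i_1..i_3
])

# Raw cross-user comparison: u_1's best score (3.0) is below u_2's
# worst (5.0) -- but that may only reflect a per-user scale,
# not a genuinely stronger preference.
print(raw[0].max() < raw[1].min())  # True

# Per-user min-max normalization puts every user on a [0, 1] scale.
norm = (raw - raw.min(axis=1, keepdims=True)) / np.ptp(raw, axis=1, keepdims=True)
print(norm.round(2))
```

After normalization, both users' favourite items sit at 1.0, so the raw score gap between users disappears. Whether that kind of rescaling is justified is exactly what I'm asking.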
3- What is the best and most computationally efficient way of calculating how much confidence there is in each user-item score? In other words, I am looking for the quadruplet (u_i, I_j, s_ij, cs_ij), where s_ij is the score for user i's interaction with item j and cs_ij is a confidence score indicating how reliable the predicted s_ij is.
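For question 3, the only approach I can think of is an ensemble: train K models on bootstrap resamples of the interactions and use the spread of their scores as a confidence signal. A toy sketch of the quadruplet idea (the ensemble scores are faked with random draws here; in practice each would come from one model's predict call, and the 1/(1+std) mapping is just one arbitrary choice of confidence score):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: K = 20 scores for a single (u_i, I_j) pair, one per
# bootstrap-trained model. Faked as normal draws for illustration.
ensemble_scores = rng.normal(loc=3.2, scale=0.4, size=20)

s_ij = ensemble_scores.mean()                       # point-estimate score
cs_ij = 1.0 / (1.0 + ensemble_scores.std(ddof=1))   # higher spread -> lower confidence
print(s_ij, cs_ij)
```

The obvious drawback is cost: K full trainings and K forward passes per user-item pair, which is why I'm asking whether there is something cheaper.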
@maciejkula any pointer on the questions above is much appreciated!