Discrepancy: als.rank_items and als.explain return different scores.
I'm seeing a difference between the total_score returned by model.explain and the score in the (itemid, score) tuples returned by model.rank_items. Both are called with the same userid, user_items, and itemid.
I thought these were measuring the same thing. Could someone shed some light on what's going on? As a check, could someone verify what each of these scores represents?
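For reference, a minimal reproduction sketch, assuming the pre-0.5 implicit API where model.rank_items is still available. The toy matrix and all parameter values here are made up:

```python
import numpy as np
from scipy.sparse import csr_matrix
from implicit.als import AlternatingLeastSquares

# Toy confidence data: rows = items, columns = users (the fit convention
# in implicit < 0.5). Values are hypothetical.
item_users = csr_matrix(np.random.poisson(0.3, size=(50, 20)).astype(np.float64))

model = AlternatingLeastSquares(factors=8, iterations=15)
model.fit(item_users)

user_items = item_users.T.tocsr()  # user-item matrix expected by both calls
userid, itemid = 0, 3

# rank_items: scores come from the dot product of the stored user/item factors.
ranked = dict(model.rank_items(userid, user_items, selected_items=[itemid]))

# explain: total_score is the sum of per-item contributions, computed from a
# user factor that is re-derived on the fly.
total_score, contributions, _ = model.explain(userid, user_items, itemid)

print(ranked[itemid], total_score)  # close, but not necessarily identical
```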
same issue
Even though this has been open for a long time, I have been looking into this as well. How large is your discrepancy? I see a very small one, around the 4th decimal place, which I assume comes from the Cholesky decomposition used to recompute the user weights. Be aware that for model.explain you have to use the weighted confidence matrix.
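If it helps, here is a rough sketch of what I believe explain is doing under the hood: it re-derives the user factor by solving (Y^T C_u Y + lambda*I) x_u = Y^T C_u p_u with a Cholesky factorization, where C_u holds the confidence weights taken from the user_items row you pass in (that is the "weighted confidence matrix"). rank_items, by contrast, just dots the stored factors from training. The helper below is hypothetical, not the library's code:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def explain_style_score(Y, user_row, itemid, regularization=0.01):
    """Hypothetical re-derivation of explain()'s total_score.

    Y        : (n_items, n_factors) item-factor matrix
    user_row : dense 1-D array of confidence weights c_ui for one user,
               e.g. 1 + alpha * raw count
    """
    liked = np.flatnonzero(user_row)
    Yl = Y[liked]                     # factors of items the user interacted with
    cu = user_row[liked]              # confidence weights c_ui
    # Normal equations Y^T C_u Y + lambda * I, solved via Cholesky
    A = Yl.T @ (Yl * cu[:, None]) + regularization * np.eye(Y.shape[1])
    b = Yl.T @ cu                     # Y^T C_u p_u with binary preferences p_ui = 1
    xu = cho_solve(cho_factor(A), b)  # re-derived user factor
    return Y[itemid] @ xu             # equals the sum of explain()'s contributions
```

Since ALS alternates updates, the stored user factor was solved against the item factors of an earlier step, so re-deriving it against the final Y can plausibly shift the dot product by a small amount, which would fit a discrepancy in the 4th decimal place.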
@lucatrovato Hey, I am not sure what you mean by the weighted confidence matrix; can you elaborate on which one exactly? (I know it's been a while since your last comment, I hope you still remember this detail.)