
precision_at_k is slower than training

Open dpalbrecht opened this issue 2 years ago • 2 comments

How come evaluation with precision_at_k is so slow? Training takes 2.5 minutes per epoch on the training set, but evaluating the test set, at 10% of the training set's size, has taken at least 10 minutes so far, and there's no way to tell how much longer it'll take. Has anyone else run into this?

from lightfm.evaluation import precision_at_k

test_precision = precision_at_k(model, test_matrix, k=12, train_interactions=train_matrix).mean()

dpalbrecht avatar Feb 24 '22 05:02 dpalbrecht

It is pretty normal for evaluation to take much longer than training, because evaluation requires making lots of predictions. I can't say anything about your specific timings, though.

Generally, I would recommend testing your code on small or medium samples first, to make sure it runs through and to make debugging easier.
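For example, one way to get a runtime estimate cheaply is to evaluate on a random fraction of users first. Zeroing out the other users' rows (rather than slicing them away) keeps the matrix shape, and hence the user/item id mapping, aligned with the model. This is a sketch with a synthetic scipy matrix standing in for your test set, not anything built into LightFM:

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)

# Synthetic stand-in for a sparse test interaction matrix
# (50k users x 10k items, ~50k interactions).
test_matrix = sparse.random(50_000, 10_000, density=1e-4,
                            random_state=0, format="csr")

# Keep a random 5% of users; zero out everyone else's row so the
# matrix shape still matches what the model expects.
sample = rng.choice(test_matrix.shape[0],
                    size=test_matrix.shape[0] // 20, replace=False)
keep = np.zeros(test_matrix.shape[0], dtype=bool)
keep[sample] = True

small_test = test_matrix.multiply(keep[:, None]).tocsr()
small_test.eliminate_zeros()

# Same shape, ~5% of the interactions: timing precision_at_k on this
# gives a rough per-user cost you can extrapolate to the full test set.
print(small_test.shape, small_test.nnz)
```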

Hope that helps.

SimonCW avatar Feb 25 '22 12:02 SimonCW

Thanks @SimonCW! Is that really true, though? Both training and evaluation require a forward pass, and the only difference is updating weights versus calculating a metric, where calculating precision is usually incredibly fast. Perhaps evaluation makes a single prediction at a time for memory's sake. I also see that it removes items a user has already interacted with (when train_interactions is passed), which will make it slower, although that behavior is a bit too opinionated for some cases.
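One thing that intuition leaves out is the ranking step: to compute precision@k, the evaluator has to score every item for every user and rank those scores, which is roughly O(users × items) work regardless of how sparse the test set is, whereas a training epoch only touches the observed interactions. A rough numpy sketch of what ranking-based evaluation does (toy dot-product embeddings standing in for the model; this is not LightFM's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim, k = 1_000, 5_000, 32, 12

# Toy embeddings standing in for a trained factorization model.
user_emb = rng.normal(size=(n_users, dim))
item_emb = rng.normal(size=(n_items, dim))

# Dense boolean masks standing in for sparse train/test interactions.
train_mask = rng.random((n_users, n_items)) < 0.01
test_mask = (rng.random((n_users, n_items)) < 0.001) & ~train_mask

# Ranking-based evaluation scores *every* item for *every* user:
# an (n_users x n_items) prediction matrix, no matter how few
# interactions the test set actually contains.
scores = user_emb @ item_emb.T

# Exclude items seen in training so they can't occupy top-k slots
# (what passing train_interactions does).
scores[train_mask] = -np.inf

# Top-k items per user, then precision@k against test interactions.
top_k = np.argpartition(-scores, k, axis=1)[:, :k]
hits = np.take_along_axis(test_mask, top_k, axis=1)
prec_at_k = hits.mean(axis=1)

print(scores.shape)  # full prediction matrix computed just to evaluate
```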

dpalbrecht avatar Mar 05 '22 16:03 dpalbrecht