Nicolas Hug

570 comments

Thanks for the report @amohar2, I think you're right, it looks like there are lots of unnecessary allocations. ```py for u in range(trainset.n_users): # Might as well allocate the max...
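The allocation issue mentioned above can be avoided by hoisting the buffer out of the per-user loop. A minimal sketch under that assumption (the `n_users`/`n_items` names mirror Surprise's trainset attributes, but the loop body is a stand-in, not the library's actual code):

```python
import numpy as np

n_users, n_items = 4, 10  # toy sizes standing in for trainset.n_users / trainset.n_items

# Instead of allocating a fresh array on every iteration...
#   for u in range(n_users):
#       scores = np.zeros(n_items)   # new allocation each time through the loop
# ...allocate the maximum size once and reuse it:
scores = np.empty(n_items)           # one allocation, reused below
totals = []
for u in range(n_users):
    scores.fill(0.0)                 # reset in place, no new allocation
    scores += u                      # stand-in for the real per-user computation
    totals.append(scores.sum())
```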

What data are you using? Can you show the scores? It's possible that they're all NaNs or all equal.

You can see the details here: https://github.com/NicolasHug/Surprise/blob/00904a11c39f4871102fa6daf0899cf9993a790d/surprise/prediction_algorithms/matrix_factorization.pyx#L260-L269
1: global mean + item bias
2: global mean + user bias
3: global mean
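The three fallback cases above can be sketched as follows. This is an illustration of the logic, not Surprise's actual code; the names `global_mean`, `bu`, `bi`, `pu`, `qi` follow the library's conventions, and the `known_user`/`known_item` flags are assumptions standing in for the trainset lookups:

```python
import numpy as np

def estimate(known_user, known_item, global_mean, bu, bi, pu, qi):
    """Sketch of SVD's prediction fallbacks for unknown users/items.

    With both the user and the item known, use the full biased
    dot-product model; otherwise fall back to whatever biases exist.
    """
    if known_user and known_item:
        return global_mean + bu + bi + np.dot(pu, qi)
    if known_item:              # case 1: global mean + item bias
        return global_mean + bi
    if known_user:              # case 2: global mean + user bias
        return global_mean + bu
    return global_mean          # case 3: global mean only
```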

> is the implementation you showed me effective for this case

Not really, the mean of all ratings is pretty uninformative when it comes to recommending personalized items. The problem...

Also check this thread: https://github.com/NicolasHug/Surprise/issues/208. There are forks out there that have tried to tackle this problem in Surprise. I haven't checked them in detail, though.

Torchvision has a CI job that runs daily and tries to download the datasets that torchvision exposes. If there's a failure, the job would open an issue like [these ones](https://github.com/pytorch/vision/issues?q=is%3Aissue+sort%3Aupdated-desc+is%3Aclosed+Scheduled+workflow+failed)....

I haven't really thought about it, but I don't see any obvious reason for making it random. When there are failures, debugging can become a pain pretty quickly, and any...

Maybe this issue could be closed, see https://github.com/pytorch-labs/torchtune/issues/341#issuecomment-1963951352

> Do we recommend any alternatives?

This would be case-by-case. For the TorchServe example, the simple alternative is to copy/paste the one functionality that was used from torchtext into the...

Hi @ksachdeva , thanks for the detailed report. I'm not sure I completely follow everything yet, but I think the line you're referring to is similar to these ones, from...