Ben Frederickson
I haven't thought about Apple Silicon support on conda yet - but we are building binary wheels for Apple Silicon on [pypi](https://pypi.org/project/py-spy/#files) right now. The [py_spy-0.3.12-py2.py3-none-macosx_10_9_x86_64.macosx_11_0_arm64.macosx_10_9_universal2.whl](https://files.pythonhosted.org/packages/f4/56/3b2bc15261959abe4238aa30521a0dd155135c225b33f6d948e79314d1f3/py_spy-0.3.12-py2.py3-none-macosx_10_9_x86_64.macosx_11_0_arm64.macosx_10_9_universal2.whl) wheel should work...
With the ItemItem models (CosineRecommender etc.) - you don't need to retrain the model at all. The model learns an item-item similarity matrix, and when generating recommendations uses the 'user_items'...
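Something like this, as a rough sketch (assuming implicit >= 0.5, where `fit()` and `recommend()` both take a user-by-item CSR matrix; the data here is made up):

```python
import numpy as np
from scipy.sparse import csr_matrix
from implicit.nearest_neighbours import CosineRecommender

# toy user x item interaction matrix (hypothetical data)
user_items = csr_matrix(
    np.random.binomial(1, 0.05, size=(1000, 500)).astype(np.float32))

model = CosineRecommender(K=20)
model.fit(user_items)  # learns the item-item similarity matrix once

# when a user's interactions change, just pass their updated row to
# recommend() - the similarity matrix itself doesn't need retraining
updated_row = user_items[0]  # pretend this row has the new likes in it
ids, scores = model.recommend(0, updated_row, N=10)
```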
Does the user have any liked items in the training set?
@srcolinas - I believe in this case the user will get a zero vector for their factors, because of [this code](https://github.com/benfred/implicit/blob/master/implicit/_als.pyx#L165). I'm surprised that it doesn't return anything though...
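A small helper along those lines (the function name is hypothetical; `user_factors` here is the CPU ALS model's learned user embedding matrix):

```python
import numpy as np

def has_trained_factors(model, userid):
    # a user with no liked items in the training data ends up with an
    # all-zero factor vector, so every recommendation score is zero
    return bool(np.any(model.user_factors[userid]))
```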
There are two different things you can do here with implicit to get near-realtime updates with the ALS model: 1) You can set the `recalculate_user` flag on the `model.recommend`...
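A rough sketch of option 1 (assuming implicit >= 0.5, with made-up data; the new interaction added below is hypothetical):

```python
import numpy as np
from scipy.sparse import csr_matrix
from implicit.als import AlternatingLeastSquares

# toy training data: user x item CSR matrix (hypothetical)
user_items = csr_matrix(
    np.random.binomial(1, 0.05, size=(1000, 500)).astype(np.float32))

model = AlternatingLeastSquares(factors=64, regularization=0.01)
model.fit(user_items)

# later, user 0 picks up a new interaction: build their fresh row and
# let recommend() solve for their factors on the fly with
# recalculate_user=True, instead of retraining the whole model
new_row = user_items[0].tolil()
new_row[0, 42] = 1.0  # hypothetical new like
ids, scores = model.recommend(0, new_row.tocsr(), N=10,
                              recalculate_user=True)
```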
This isn't something that is explicitly supported - but should work for all the models except for the AnnoyAlternatingLeastSquares class (since annoy doesn't support pickle).
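For example (a minimal sketch, assuming `model` is an already-trained AlternatingLeastSquares instance):

```python
import pickle

# save the trained model to disk
with open("als_model.pkl", "wb") as f:
    pickle.dump(model, f)

# ...and load it back later, e.g. in a serving process
with open("als_model.pkl", "rb") as f:
    model = pickle.load(f)
```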
Thanks! 1. For the regularization parameter, the range you've picked seems reasonable to me. The parameter just controls the strength of the L2 regularization term, which helps prevent overfitting. I'm not sure why...
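As an illustration, a hypothetical sweep over regularization values (the specific range and the use of precision@10 here are made up, and the toy data is random):

```python
import numpy as np
from scipy.sparse import csr_matrix
from implicit.als import AlternatingLeastSquares
from implicit.evaluation import precision_at_k, train_test_split

# toy user x item matrix (hypothetical data)
user_items = csr_matrix(
    np.random.binomial(1, 0.05, size=(1000, 500)).astype(np.float32))
train, test = train_test_split(user_items)

for reg in (0.001, 0.01, 0.1, 1.0, 10.0):
    model = AlternatingLeastSquares(factors=64, regularization=reg)
    model.fit(train, show_progress=False)
    print(reg, precision_at_k(model, train, test, K=10, show_progress=False))
```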
Like @leodesigner was saying - this isn't supported in this library right now, but you can build this on top of implicit with some effort. Adding support for incremental training...
This is something I'd like to add sometime soon (especially relevant since I recently started a new job at NVIDIA =). Are you interested in multi-GPU support for faster training...
I think reduced-precision user/item factors might help here. If you're going with 64 factors per user, then storing the factors for all users will be around 10GB assuming fp32...
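Back-of-the-envelope arithmetic behind that estimate (the user count is a made-up assumption, picked so the numbers land around 10GB):

```python
n_users = 40_000_000   # hypothetical number of users
factors = 64

fp32_gb = n_users * factors * 4 / 1e9   # ~10.2 GB at float32
fp16_gb = n_users * factors * 2 / 1e9   # ~5.1 GB at float16
print(fp32_gb, fp16_gb)

# after training you could downcast the learned factors for serving,
# e.g. model.user_factors.astype(np.float16) - though the library's own
# recommend() path may expect float32, so you'd likely do the scoring
# dot products yourself against the reduced-precision copy
```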