Juan Carlos Rendon
Well, I installed the Spotlight library from conda a few hours ago, version 0.1.1. Is this not the latest?
I uninstalled Spotlight, cloned the GitHub repository, and installed it again, but I got the same error. I converted the ratings to float32 and now it works. Let me continue playing... thanks.
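For anyone hitting the same dtype error, here is a minimal sketch of the cast that fixed it for me; the ids and ratings below are made-up placeholders, and I am assuming the ratings array was the offending one:

```python
import numpy as np
from spotlight.interactions import Interactions

# made-up example data
user_ids = np.array([0, 0, 1, 2], dtype=np.int32)
item_ids = np.array([1, 2, 2, 3], dtype=np.int32)
ratings = np.array([1, 0, 1, 1]).astype(np.float32)  # cast ratings to float32

dataset = Interactions(user_ids, item_ids, ratings=ratings)
```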
Why do I get this error when I use `model.predict(id_user)`?

```
Traceback (most recent call last):
  File "conda1.py", line 69, in <module>
    print model.predict(1)
TypeError: predict() takes exactly 3 arguments (2 given)...
```
Don't be afraid, we love your work :heart: I tried that, and now I get the next error:

```
predictions = model.predict(np.array[1, 1, 1], dtype=np.int64, np.array[1, 2, 3], dtype=np.int64)
SyntaxError: ...
```
Oh, I see: `predictions = model.predict(np.array([1, 1, 1], dtype=np.int64), np.array([1, 2, 3], dtype=np.int64))`, but now I only get this: `[ 0.02556127  0.03148273  0.02408132]`. Is this the score? Are the item_ids missing? Thanks.

EDIT: ohh right... it is the score for the specified items.
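In case it helps someone else, this is how I read the output now; a small sketch (made-up ids, and assuming `model` is the fitted model from above) that pairs each requested item with its score:

```python
import numpy as np

user_ids = np.array([1, 1, 1], dtype=np.int64)
item_ids = np.array([1, 2, 3], dtype=np.int64)

# one score per (user, item) pair, returned in the same order as item_ids
scores = model.predict(user_ids, item_ids)

for item, score in zip(item_ids, scores):
    print(item, score)
```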
But when I try this:

```
INPUT:
predictions = model.predict(1)

OUTPUT:
[ 0.01160015  0.02628824  0.02972064  0.02319316  0.02643632  0.03024803
  0.02928654  0.03212331  0.03058725  0.02894426  0.02387976  0.0281925
  0.03137119  0.03292302  0.02974606  0.0244297   0.0088104 ...
```
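If I understand the single-argument form correctly, it returns one score per item id, so sorting the indices gives a ranking; a sketch, assuming `model` is already fit:

```python
import numpy as np

scores = model.predict(1)       # one score per item, indexed by item id
ranking = np.argsort(-scores)   # item ids ordered from highest to lowest score
print(ranking[:10])             # top-10 items for user 1
```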
I have a dataset (user/item/rating) where the rating value is only 0 or 1. Which model is best for that? Because I tried using the ImplicitFactorizationModel, but I...
But I see that sometimes the models treat the "0" value as an "unseen item"; in my case 1 = like and 0 = dislike, so 0 != "unseen", and this may affect the results. So...
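What I am trying now, in case it is the right direction (I am not sure this is the recommended setup for like/dislike data): treat the 0/1 values as explicit binary ratings and use the explicit model with a logistic loss, instead of the implicit model that samples negatives.

```python
import numpy as np
from spotlight.interactions import Interactions
from spotlight.factorization.explicit import ExplicitFactorizationModel

# made-up data: 0 = dislike, 1 = like, both are observed feedback (not "unseen")
user_ids = np.array([0, 0, 1, 2], dtype=np.int32)
item_ids = np.array([1, 2, 2, 3], dtype=np.int32)
ratings = np.array([1, 0, 1, 1], dtype=np.float32)

dataset = Interactions(user_ids, item_ids, ratings=ratings)

# 'logistic' loss treats the ratings as binary labels (my assumption is that
# it fits this use case better than the default regression loss)
model = ExplicitFactorizationModel(loss='logistic', n_iter=10)
model.fit(dataset)
```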
I ran the experiment with the ExplicitModel, and now I get scores between 0.55340898 (min) and 1.06373000 (max). I am trying to understand: is this a probability? But why, if the...
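My current understanding (please correct me if I am wrong) is that with the default regression loss the scores are just unbounded predicted ratings, not probabilities; if the model is trained with the logistic loss instead, I assume the raw scores can be squashed through a sigmoid to read them as like-probabilities, roughly like this:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

raw_scores = model.predict(1)        # raw scores, can fall outside [0, 1]
probabilities = sigmoid(raw_scores)  # only meaningful if trained with a logistic loss
```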