Tung Dang
Hi @benfred, I have an ALS model with more than 40 million users, about 10k items, and approximately 500 million ratings. The maximum memory I could get from one GPU...
@benfred: yes, reducing the number of factors is what we did to fit it into our GPU. Adding another GPU is a bit difficult since we rely on Google Cloud...
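For context on the factor-count workaround, here is a minimal sketch (not from the original thread) of configuring implicit's ALS for GPU with fewer factors; the toy matrix, factor count, and other hyperparameters are placeholders, and the matrix orientation expected by `fit()` depends on the implicit version installed.

```python
# Minimal sketch: fit implicit's ALS on GPU with a reduced factor count to lower
# GPU memory use. Toy data is illustrative; implicit >= 0.5 expects a user-item
# matrix in fit(), and use_gpu=True requires a CUDA-enabled build of implicit.
import implicit
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical (user, item, confidence) triples standing in for the real ratings.
user_ids = np.array([0, 0, 1, 2])
item_ids = np.array([0, 2, 1, 2])
confidence = np.array([3.0, 1.0, 5.0, 2.0])
user_items = csr_matrix((confidence, (user_ids, item_ids)))

model = implicit.als.AlternatingLeastSquares(
    factors=32,          # fewer factors -> smaller GPU memory footprint
    regularization=0.01,
    iterations=15,
    use_gpu=True,
)
model.fit(user_items)
```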
You could add the following code to the Query class and provide the types to get the correct query, e.g.:

```python
class Query:
    ...
    def to_query(self, types):
        if self.agg_ops[self.agg_index]:
            rep = ...
```
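A speculative sketch of what such an override could look like, assuming a WikiSQL-style `Query` with `agg_ops`/`cond_ops` class attributes and conditions stored as (column index, operator index, value) triples; the type-casting logic and every name beyond those shown in the comment are assumptions, not the poster's exact code.

```python
class Query:
    agg_ops = ['', 'MAX', 'MIN', 'COUNT', 'SUM', 'AVG']
    cond_ops = ['=', '>', '<', 'OP']

    def __init__(self, sel_index, agg_index, conditions=None):
        self.sel_index = sel_index
        self.agg_index = agg_index
        self.conditions = conditions or []

    def to_query(self, types):
        # Build the SELECT clause, wrapping the column in an aggregate when one is set.
        sel = 'col{}'.format(self.sel_index)
        if self.agg_ops[self.agg_index]:
            rep = 'SELECT {}({}) FROM table'.format(self.agg_ops[self.agg_index], sel)
        else:
            rep = 'SELECT {} FROM table'.format(sel)
        # Cast each condition value according to its column type so comparisons are typed.
        if self.conditions:
            clauses = []
            for col, op, val in self.conditions:
                if types[col] == 'real':
                    val = float(val)
                clauses.append('col{} {} {!r}'.format(col, self.cond_ops[op], val))
            rep += ' WHERE ' + ' AND '.join(clauses)
        return rep

# Example: prints "SELECT COUNT(col0) FROM table WHERE col1 > 5.0"
print(Query(0, 3, [(1, 1, '5')]).to_query(['text', 'real']))
```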
OK, I have found a workaround for this. Basically I needed to override the `__eq__` method by providing my own Prediction class and wrapping the result in the new class...
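A rough sketch of that workaround (the field names and tolerance are assumptions, not the library's actual Prediction type): wrap the library's result in a subclass that overrides `__eq__`, so comparisons use the fields you actually care about instead of the default check.

```python
from collections import namedtuple

# Stand-in for the library's own result type.
BasePrediction = namedtuple('BasePrediction', ['uid', 'iid', 'est'])

class Prediction(BasePrediction):
    """Prediction wrapper whose equality compares ids and the estimate with a tolerance."""

    def __eq__(self, other):
        return (self.uid, self.iid) == (other.uid, other.iid) and abs(self.est - other.est) < 1e-6

    def __hash__(self):
        # Keep hashing consistent with the custom equality.
        return hash((self.uid, self.iid))

# Usage: wrap the raw result returned by the library before comparing.
raw = BasePrediction('u1', 'i9', 3.5)
wrapped = Prediction(*raw)
assert wrapped == Prediction('u1', 'i9', 3.5000001)
```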
Any update on this issue?
@benoitgoujon: The login UI is shown, but sadly the token exchange protocol does not work after that; the client still needs to resolve the host from redeem_url. I have found...