
Lib-INVENT does not work with CPU

Open · jhazemann opened this issue 2 years ago · 1 comment

Hi, I tried the Reinforcement Learning demo and it worked well on my machine with 32 CPUs, but when I tried the Lib-INVENT_RL2_QSAR_RF demo I got the following error:

`RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.`

Is there a reason why the RL demo works with CPUs but not Lib-INVENT_RL? Thank you.
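For what it's worth, the error message itself points at the general workaround for loading a GPU-saved checkpoint on a CPU-only machine: pass `map_location` to `torch.load`. This is only a minimal sketch of that mechanism using an in-memory buffer, not a patch for Lib-INVENT itself (the actual failing load happens inside the library's own code, so applying it there would require editing the library):

```python
import io
import torch

def load_checkpoint_cpu(path_or_buffer):
    # map_location remaps every CUDA storage in the checkpoint to a CPU tensor,
    # so deserialization succeeds even when torch.cuda.is_available() is False.
    return torch.load(path_or_buffer, map_location=torch.device("cpu"))

# Demonstration with an in-memory buffer standing in for a real model file:
buf = io.BytesIO()
torch.save({"weights": torch.zeros(2, 2)}, buf)
buf.seek(0)
state = load_checkpoint_cpu(buf)
print(state["weights"].device)  # cpu
```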

jhazemann avatar Dec 22 '21 16:12 jhazemann

Hi, thanks for reporting this. The reason for the different behavior is that these are two different generative models, and somewhere down the logical flow (in `_initialize_hidden_state`) the model explicitly requires a CUDA device and tries to load the data there. In general we use a GPU-enabled architecture, and on the generative-model end we recommend GPU over CPU. A good portion of RL relies on CPUs as well, which can be helpful when the scoring function components are calculated, especially if docking is being used. We may address the reported issue in one of the future releases, but I'm afraid it might take a while, since the next one is at least a couple of months away.
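The device-agnostic pattern the maintainer alludes to can be sketched as follows. The class and method names here (`DecoratorRNN`, `_initialize_hidden_state`, `batch_size`) are illustrative stand-ins, not the actual Lib-INVENT source; the point is simply that the hidden state is allocated on whichever device is available instead of hard-coding `"cuda"`:

```python
import torch

class DecoratorRNN(torch.nn.Module):
    """Hypothetical stand-in for a recurrent generative model."""

    def __init__(self, hidden_size=8, num_layers=1):
        super().__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        # Select the device once, at construction time, based on availability.
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    def _initialize_hidden_state(self, batch_size):
        # Allocate on the selected device rather than assuming a GPU exists.
        return torch.zeros(
            self.num_layers, batch_size, self.hidden_size, device=self.device
        )

model = DecoratorRNN()
h0 = model._initialize_hidden_state(batch_size=4)
print(h0.shape)  # torch.Size([1, 4, 8])
```

The same availability check would also need to be applied wherever the model's weights are deserialized (via `map_location`) for the demo to run end to end on a CPU-only machine.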

patronov avatar Dec 31 '21 01:12 patronov