ArcaneGAN
CPU inference in Colab
Hi, is CPU inference possible in Colab?
Hi, may I ask whether you managed to run inference on the CPU, or did you just decide to drop the issue? I am also interested in CPU inference and have not been able to get it working so far, so any help would be appreciated! Thanks
Hi, you closed the issue, so I thought you had figured this out :D Which notebook are you talking about?
I am asking about the image inference Colab at this link: https://colab.research.google.com/drive/1r1hhciakk5wHaUn1eJk7TP58fV9mjy_W
Just replace all the .cuda() with .cpu() in the code.
I guess I should add dynamic selection based on the environment.
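For reference, a minimal sketch of what that environment-based selection could look like (this is not from the notebook itself, just the usual PyTorch idiom):

```python
import torch

# Pick the GPU when the Colab runtime has one, otherwise fall back to CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Half precision is only worthwhile on GPU; CPU inference stays in float32.
dtype = torch.float16 if device.type == 'cuda' else torch.float32
```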
Sure, that was the method I tried (replacing all .cuda() with .cpu()), but unfortunately it did not work. I don't remember the exact error message, but I will check again later and send some logs.
Ah, probably half-precision isn't supported on CPU, so try replacing .half() with .float() as well.
This might still not work because of hardcoded .jit datatypes inside the model, but worth a try.
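To make the suggestion concrete, here is a sketch of the two swaps, assuming the notebook's load line looks like torch.jit.load(model_path).eval().cuda().half() (model_path is just a placeholder name here):

```python
import torch

model_path = 'ArcaneGAN.jit'  # placeholder: whatever checkpoint the notebook downloads

# GPU version from the notebook:  model = torch.jit.load(model_path).eval().cuda().half()
# CPU attempt with the two swaps applied:
model = torch.jit.load(model_path).eval().cpu().float()

# Any input tensors need the same treatment,
# e.g. x = x.cpu().float() instead of x.cuda().half()
```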
I have replaced all the .cuda() with .cpu() and all the .half() with .float(), but I still get this error: RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
The line that causes this error is the following: model = torch.jit.load(model_path).eval().cpu().float()
I think I have found some kind of solution (maybe not the best); see the sketch after this list:
- replace all the .cuda() with .cpu()
- replace all the .half() with .half().float()
- add map_location='cpu' parameter to the torch.jit.load
- use torch==1.8.1
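Putting those points together, a rough sketch of a CPU-only load cell could look like this (model_path and the dummy input are placeholders, not the notebook's actual code):

```python
import torch  # reported working with torch==1.8.1

model_path = 'ArcaneGAN.jit'  # placeholder for the checkpoint the notebook downloads

# map_location='cpu' remaps the CUDA tensors stored in the TorchScript archive,
# which is what avoids the "Found no NVIDIA driver" error on a CPU-only runtime.
model = torch.jit.load(model_path, map_location='cpu').eval().cpu().float()
# (.half().float() as listed above also ends up as float32)

# Inputs likewise go through .cpu().float() instead of .cuda().half().
x = torch.rand(1, 3, 256, 256).cpu().float()  # dummy tensor just to show device/dtype;
                                              # the real notebook feeds a preprocessed image
with torch.no_grad():
    out = model(x)
```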