MMdnn
RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu
Platform (like ubuntu 16.04/win10): Ubuntu 16.04
Python version: python 3.6
Source framework with version (like Tensorflow 1.4.1 with GPU): PyTorch 1.2.0 with GPU, torchvision 0.4.0
Destination framework with version (like CNTK 2.3 with GPU): TensorFlow 1.8.0 with GPU
Pre-trained model path (webpath or webdisk path): local disk
Running scripts: mmtoir -f pytorch -d mobilenet_v2 --inputShape 3,224,224 -n test_model.pth
The model was trained with 4 GPUs, and I saved the architecture and weights of the net. When I execute the script "mmtoir -f pytorch -d mobilenet_v2 --inputShape 3,224,224 -n test_model.pth", I encounter the error: RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu. I don't know what is wrong; any advice?
Here is the full error output:
Traceback (most recent call last):
File "/home/xx/anaconda3/envs/py36-1/bin/mmtoir", line 11, in
Any reply is welcome!
I have the same problem, and I don't know how to check which device the parameters are on. Hopefully someone responds.
Maybe changing this line to dummy_input = torch.autograd.Variable(torch.randn(shape), requires_grad=False).cuda() would help.
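A sketch of that suggestion, wrapped in a hypothetical helper (the name make_dummy_input is mine, not from MMdnn). Note that torch.autograd.Variable has been deprecated since PyTorch 0.4; a plain tensor created directly on the target device behaves the same:

```python
import torch

def make_dummy_input(shape, device=None):
    # Pick CUDA when available so the dummy input sits on the same
    # device as a model that was moved to cuda:0; fall back to CPU.
    if device is None:
        device = "cuda:0" if torch.cuda.is_available() else "cpu"
    # requires_grad=False is the tensor default; spelled out here to
    # mirror the Variable(..., requires_grad=False) suggestion above.
    return torch.randn(*shape, device=device, requires_grad=False)

dummy_input = make_dummy_input((1, 3, 224, 224))
```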
Thanks!
I have a similar problem when computing the FLOPs. The error is fixed by using net instead of net.module.
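For context on the net vs. net.module distinction: when a model is wrapped in nn.DataParallel, net is the wrapper (which scatters inputs to the right GPUs), while net.module is the bare underlying model, whose inputs must already be on the correct device. A minimal sketch, using a tiny stand-in model and guarding the GPU path so it also runs on CPU-only machines:

```python
import torch
import torch.nn as nn

net = nn.Linear(4, 2)  # stand-in for the real network

if torch.cuda.is_available():
    net = nn.DataParallel(net.cuda(), device_ids=[0])
    x = torch.randn(8, 4).cuda()
    y = net(x)           # DataParallel handles device placement
    _ = net.module(x)    # works only because x is already on cuda:0
else:
    x = torch.randn(8, 4)
    y = net(x)
```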
"module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu"
For anybody still encountering this issue: this error often happens when the nn.Module is not moved to the GPU. Make sure to
- select your desired GPU, e.g.
torch.cuda.set_device('cuda:0')
- and call
model.cuda()
on your nn.Module
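The two steps above, put together in a minimal sketch (the tiny Sequential model is a stand-in for your own network; the CUDA calls are guarded so the snippet also runs on a CPU-only box):

```python
import torch
import torch.nn as nn

# Stand-in model; replace with your own nn.Module.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())

if torch.cuda.is_available():
    torch.cuda.set_device("cuda:0")  # select the desired GPU
    model.cuda()                     # move all parameters and buffers to cuda:0

# After this, every parameter should live on a single device.
devices = {p.device.type for p in model.parameters()}
```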
@Linus4world it worked, thanks
What if I use CPU only?
I have the same question: what if I just use the CPU?
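For the CPU-only case, a common workaround (not spelled out in this thread, so treat it as a sketch): load the checkpoint with map_location="cpu", and if the model was saved from an nn.DataParallel wrapper, strip the "module." prefix that DataParallel adds to every state_dict key before loading it into a plain model. The helper name strip_module_prefix is made up for illustration:

```python
from collections import OrderedDict

def strip_module_prefix(state_dict):
    """Remove the 'module.' prefix nn.DataParallel adds to keys,
    so the weights load into a plain (CPU) model."""
    return OrderedDict(
        (k[len("module."):] if k.startswith("module.") else k, v)
        for k, v in state_dict.items()
    )

# Usage sketch (path is an example from the thread's command line):
# state = torch.load("test_model.pth", map_location="cpu")
# model.load_state_dict(strip_module_prefix(state))
```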