
Problems running inference on CPU

eguoguo321 opened this issue on Feb 15, 2023 · 0 comments

Here is my code:

```python
model = create_model_test(args, load_head=True).cuda()
########### eliminate BN for faster inference ###########
model.load_state_dict(torch.load(args.model_path, map_location='cpu'))
model = model.cpu()
model = InplacABN_to_ABN(model)
model = fuse_bn_recursively(model)
model = model.cpu()
```

But when I deployed it on a CPU-only server, it fails with this error:

```
Traceback (most recent call last):
  File "infer_xxx.py", line 97, in <module>
    main()
  File "infer_xxx.py", line 82, in main
    output = torch.squeeze(torch.sigmoid(model2(tensor_batch)))
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jovyan/Multilabel/code/ML_Decoder/src_files/models/tresnet/tresnet.py", line 204, in forward
    x = self.body(x)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/container.py", line 139, in forward
    input = module(input)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/container.py", line 139, in forward
    input = module(input)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jovyan/Multilabel/code/ML_Decoder/src_files/models/tresnet/tresnet.py", line 120, in forward
    out = self.conv2(out)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/container.py", line 139, in forward
    input = module(input)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jovyan/Multilabel/code/ML_Decoder/src_files/models/tresnet/layers/anti_aliasing.py", line 18, in forward
    return self.op(x)
  File "/home/jovyan/Multilabel/code/ML_Decoder/src_files/models/tresnet/layers/anti_aliasing.py", line 40, in __call__
    return F.conv2d(input_pad, self.filt, stride=2, padding=0, groups=input.shape[1])
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
```
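The error says the input is a CPU `torch.FloatTensor` while the convolution weight is still a `torch.cuda.FloatTensor`, so at least one tensor inside the model never left the GPU. As a first check, here is a minimal diagnostic sketch using only standard PyTorch calls; if it prints nothing, the offending weight is probably created ad hoc inside a layer (neither a parameter nor a buffer), so `model.cpu()` never touches it:

```python
import torch

# Minimal diagnostic sketch: list every registered parameter/buffer that is
# still on the GPU after the CPU conversion. An empty output suggests the
# offending weight is an unregistered tensor created inside a layer.
for name, tensor in list(model.named_parameters()) + list(model.named_buffers()):
    if tensor.is_cuda:
        print("still on CUDA:", name, tuple(tensor.shape))
```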

I don't know how I can run it on the CPU.
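Judging only from the traceback, the anti-aliasing layer in `anti_aliasing.py` wraps a callable stored in `self.op` whose blur kernel `self.filt` appears to be a plain CUDA tensor rather than a registered buffer, which would explain why `model.cpu()` does not move it. A possible workaround sketch follows; the attribute names `op` and `filt` are read off the traceback, not taken from a documented ML_Decoder API:

```python
import torch

# Hedged workaround sketch: move the anti-aliasing blur kernel to the CPU by
# hand. `op` and `filt` are attribute names inferred from the traceback, not a
# documented API; adjust them if the layer stores its kernel differently.
for module in model.modules():
    op = getattr(module, "op", None)
    filt = getattr(op, "filt", None) if op is not None else None
    if torch.is_tensor(filt) and filt.is_cuda:
        op.filt = filt.cpu()
```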
