deep-text-recognition-benchmark
RuntimeError: Expected object of backend CUDA but got backend CPU for sequence element 1 in sequence argument at position #1 'tensors'
Ubuntu 18.04, CUDA 10.1, torch 1.3.1
Hello,
Could you post the messages that appear just before the error message?
I'm also experiencing this issue.
Windows 10 (1809), torch 1.4.0, CUDA 10.1
Here's how to reproduce.
First, I adjusted the model to support ONNX conversion by changing the following in model.py:
self.AdaptiveAvgPool = nn.AdaptiveAvgPool2d((512, 1)) # 512 used to be None
Then I used the following to train from scratch:
python train.py --train_data data\train --valid_data data\validation --select_data MJ-SJ --batch_ratio 0.5-0.5 --Transformation TPS --FeatureExtraction ResNet --SequenceModeling BiLSTM --Prediction Attn
Then I tried to load the model and run inference on dummy data:
from mlflow import pytorch
import torch

device = torch.device('cuda')

# load the checkpoint that mlflow logged during training
model = pytorch.load_model(r'saved_models\TPS-ResNet-BiLSTM-Attn\07_20_20-1206\best_accuracy')
model.module.eval()

# dummy input is created on the CPU, while the loaded parameters are on CUDA
x = torch.randn(1, 1, opt.imgW, opt.imgH)  # opt holds the training options
_ = model.module(x)
Traceback:
Traceback (most recent call last):
File "to_onnx.py", line 15, in <module>
_ = model.module(x)
File "C:\Users\scottmcallister\anaconda3\envs\xact_model\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\scottmcallister\xact_model\benchmark_model\saved_models\TPS-ResNet-BiLSTM-Attn\07_20_20-1206\best_accuracy\code\model.py", line 70, in forward
input = self.Transformation(input)
File "C:\Users\scottmcallister\anaconda3\envs\xact_model\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\scottmcallister\xact_model\benchmark_model\saved_models\TPS-ResNet-BiLSTM-Attn\07_20_20-1206\best_accuracy\code\modules\transformation.py",
line 31, in forward
batch_C_prime = self.LocalizationNetwork(batch_I) # batch_size x K x 2
File "C:\Users\scottmcallister\anaconda3\envs\xact_model\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\scottmcallister\xact_model\benchmark_model\saved_models\TPS-ResNet-BiLSTM-Attn\07_20_20-1206\best_accuracy\code\modules\transformation.py",
line 78, in forward
features = self.conv(batch_I).view(batch_size, -1)
File "C:\Users\scottmcallister\anaconda3\envs\xact_model\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\scottmcallister\anaconda3\envs\xact_model\lib\site-packages\torch\nn\modules\container.py", line 100, in forward
input = module(input)
File "C:\Users\scottmcallister\anaconda3\envs\xact_model\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\scottmcallister\anaconda3\envs\xact_model\lib\site-packages\torch\nn\modules\conv.py", line 345, in forward
return self.conv2d_forward(input, self.weight)
File "C:\Users\scottmcallister\anaconda3\envs\xact_model\lib\site-packages\torch\nn\modules\conv.py", line 342, in conv2d_forward
self.padding, self.dilation, self.groups)
RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _thnn_conv2d_forward
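As an aside, this first failure looks like a plain device mismatch: the loaded parameters sit on CUDA while the dummy input was created on the CPU. A minimal sketch of a workaround, assuming the checkpoint path above, the repo's default 32x100 input size, and the modified model.py that takes only the image tensor:

from mlflow import pytorch
import torch

device = torch.device('cuda')
model = pytorch.load_model(r'saved_models\TPS-ResNet-BiLSTM-Attn\07_20_20-1206\best_accuracy')
model.module.to(device).eval()

# create the dummy input directly on the same device as the model parameters
x = torch.randn(1, 1, 32, 100, device=device)  # N x C x H x W, assuming the default imgH=32, imgW=100
with torch.no_grad():
    _ = model.module(x)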
Conversely, if I try to run the same thing on the CPU:
from mlflow import pytorch
import torch

device = torch.device('cpu')

# extra kwargs such as map_location are passed through to torch.load, so the weights land on the CPU
model = pytorch.load_model(r'saved_models\TPS-ResNet-BiLSTM-Attn\07_20_20-1242\best_accuracy', map_location='cpu')
model.module.eval()

x = torch.randn(1, 1, opt.imgW, opt.imgH)  # opt holds the training options
_ = model.module(x)
Traceback:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\scottmcallister\anaconda3\envs\xact_model\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\scottmcallister\xact_model\benchmark_model\saved_models\TPS-ResNet-BiLSTM-Attn\07_20_20-1242\best_accuracy\code\model.py", line 70, in forward
input = self.Transformation(input)
File "C:\Users\scottmcallister\anaconda3\envs\xact_model\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\scottmcallister\xact_model\benchmark_model\saved_models\TPS-ResNet-BiLSTM-Attn\07_20_20-1242\best_accuracy\code\modules\transformation.py",
line 32, in forward
build_P_prime = self.GridGenerator.build_P_prime(batch_C_prime) # batch_size x n (= I_r_width x I_r_height) x 2
File "C:\Users\scottmcallister\xact_model\benchmark_model\saved_models\TPS-ResNet-BiLSTM-Attn\07_20_20-1242\best_accuracy\code\modules\transformation.py",
line 158, in build_P_prime
batch_size, 3, 2).float().to(device)), dim=1) # batch_size x F+3 x 2
RuntimeError: Expected object of backend CUDA but got backend CPU for sequence element 0 in sequence argument at position #1 'tensors'
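The second traceback points at build_P_prime in modules/transformation.py, which moves a freshly created zeros tensor to a module-level device variable; that variable is set to CUDA whenever a GPU is visible, so it clashes with the CPU copy of batch_C_prime. One possible patch, sketched below under the assumption that the GridGenerator keeps inv_delta_C and P_hat as registered buffers (so they already follow the model's device), is to take the device from the incoming tensor instead of the global:

# modules/transformation.py, GridGenerator.build_P_prime (sketch)
def build_P_prime(self, batch_C_prime):
    """ Generate Grid from batch_C_prime [batch_size x F x 2] """
    batch_size = batch_C_prime.size(0)
    batch_inv_delta_C = self.inv_delta_C.repeat(batch_size, 1, 1)
    batch_P_hat = self.P_hat.repeat(batch_size, 1, 1)
    # derive the device from the input instead of the module-level `device` global
    batch_C_prime_with_zeros = torch.cat(
        (batch_C_prime, torch.zeros(batch_size, 3, 2, device=batch_C_prime.device)), dim=1)  # batch_size x F+3 x 2
    batch_T = torch.bmm(batch_inv_delta_C, batch_C_prime_with_zeros)  # batch_size x F+3 x 2
    batch_P_prime = torch.bmm(batch_P_hat, batch_T)  # batch_size x n x 2
    return batch_P_prime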
@csmcallister did you convert the TPS-ResNet-BiLSTM-Attn module to ONNX format? Could you please share the idea?
Unfortunately not. Sorry!
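For anyone still attempting the export, a generic torch.onnx.export call is sketched below. The output file name, input size, and the assumption that the modified model.py takes only the image tensor are placeholders, and the TPS transformation relies on F.grid_sample, whose ONNX export generally needs a recent opset (16+) and a newer PyTorch than the 1.4 used above, so the export may still fail on that op.

import torch

model.module.to('cpu').eval()
dummy = torch.randn(1, 1, 32, 100)  # N x C x H x W, assuming the default imgH=32, imgW=100

# hypothetical export call; the file name and io names are placeholders
torch.onnx.export(
    model.module,
    dummy,
    'tps_resnet_bilstm_attn.onnx',
    input_names=['image'],
    output_names=['preds'],
    opset_version=16,  # GridSample (used by the TPS module) was added to ONNX in opset 16
)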