torch2trt
Driver error when performing addition or multiplication more than two times
Test code:
import torch
import torch.nn as nn
from torch2trt import torch2trt
class TestModel(nn.Module):
    def __init__(self):
        super(TestModel, self).__init__()

    def forward(self, x):
        return x + x + x

# Only occurs when using CUDA
model = TestModel().cuda().eval()
x = torch.rand((1, 3, 224, 224)).cuda()
model_trt = torch2trt(model, [x])
Log:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-5-ca2b186b88d6> in <module>
8 model = TestModel().cuda().eval()
9 x = torch.rand((1, 3, 224, 224)).cuda()
---> 10 model_trt = torch2trt(model, [x])
/usr/local/lib/python3.6/dist-packages/torch2trt-0.1.0-py3.6-linux-x86_64.egg/torch2trt/torch2trt.py in torch2trt(module, inputs, input_names, output_names, log_level, max_batch_size, fp16_mode, max_workspace_size, strict_type_constraints, keep_network, int8_mode, int8_calib_dataset, int8_calib_algorithm, int8_calib_batch_size, use_onnx)
553 )
554
--> 555 engine = builder.build_cuda_engine(network)
556
557 module_trt = TRTModule(engine, input_names, output_names)
RuntimeError: Driver error:
Environment:
- Ubuntu 18.04
- Python 3.6.9
- CUDA 10.2
- cuDNN 8.0
- TensorRT 7.2.1
- torch2trt 0.1.0
I hit the same bug. Did you find a solution?
I hit the same error when going through the example notebook at trt_pose (https://github.com/NVIDIA-AI-IOT/trt_pose/blob/master/tasks/human_pose/live_demo.ipynb)
I also tried your program and got the same issue. I'm on Ubuntu 18.04, Python 3.6.9, CUDA 11.1, and TensorRT 7.2.3.
Did anyone find a solution?
I hit the same bug.
@neoyang0620 @abaumhof @ygy12345678 FYI, Torch-TensorRT (https://github.com/NVIDIA/Torch-TensorRT) might be a better way to convert PyTorch models to TensorRT. Alternatively, inserting a dummy layer between the repeated elementwise operations may avoid this issue.
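For anyone who wants to try the Torch-TensorRT route suggested above, here is a minimal sketch reusing the TestModel from the original report. This is only an illustration, not a confirmed fix for the driver error; the compile arguments shown (inputs, enabled_precisions) follow the common Torch-TensorRT 1.x API, so check the release you have installed since the exact signature can differ between versions.

import torch
import torch.nn as nn
import torch_tensorrt

class TestModel(nn.Module):
    def __init__(self):
        super(TestModel, self).__init__()

    def forward(self, x):
        return x + x + x

model = TestModel().cuda().eval()
x = torch.rand((1, 3, 224, 224)).cuda()

# Compile with Torch-TensorRT instead of torch2trt.
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input(shape=(1, 3, 224, 224), dtype=torch.float32)],
    enabled_precisions={torch.float32},
)

y = trt_model(x)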
Could someone share how this problem was eventually solved? I've been stuck on it for a long time.