torch2trt
Torch2trt broken for latest L4T 32.4.3 release? Seeing "add_constant incompatible" errors.
Hi,
I upgraded L4T versions from 32.2.1 to the latest release 32.4.3, which has me upgrading from PyTorch 1.3 to PyTorch 1.6.0 and TensorRT 6 to TensorRT 7.1.3. I believe I also upgraded from CUDA 10.0 to CUDA 10.2.
It looks like torch2trt is now broken. After upgrading, I'm now receiving this warning:
WARNING: Unsupported numpy data type. Cannot implicitly convert to tensorrt.Weights.
Followed by this error:
TypeError: add_constant(): incompatible function arguments. The following argument types are supported:
1. (self: tensorrt.tensorrt.INetworkDefinition, shape: tensorrt.tensorrt.Dims, weights: tensorrt.tensorrt.Weights) -> tensorrt.tensorrt.IConstantLayer
Invoked with: <tensorrt.tensorrt.INetworkDefinition object at 0x7f1c56e1b8>, (), array(17287)
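The traceback hints at the cause: TensorRT 7 no longer accepts int64/float64 weights, while PyTorch 1.6 traces integer scalars as 0-d numpy int64 arrays, which `add_constant` then rejects. A minimal sketch of the kind of downcasting workaround discussed in #313 (the helper name `to_trt_weights` is my own, not part of torch2trt):

```python
import numpy as np

def to_trt_weights(value):
    """Coerce a numpy value into a dtype/shape TensorRT 7 can accept.

    TensorRT 7 dropped int64/float64 weight support, and 0-d scalars like
    array(17287) trigger the add_constant TypeError seen above.
    """
    arr = np.asarray(value)
    if arr.dtype == np.int64:
        arr = arr.astype(np.int32)    # TRT 7 has no int64 weights
    elif arr.dtype == np.float64:
        arr = arr.astype(np.float32)  # TRT computes in fp32/fp16
    if arr.ndim == 0:
        arr = arr.reshape(1)          # give the scalar an explicit shape
    return arr
```

In torch2trt itself this cast would have to happen wherever numpy values are handed to `network.add_constant`, which is why patching a single call site only moves the error to other converters.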
Following issue #313, I tried the fix suggested in one of the comments. That particular error disappears, but the same problem then surfaces in other parts of the code.
This same conversion using the same PyTorch model works in L4T 32.2.1 (PyTorch 1.3 and TensorRT 6).
You can convert your PyTorch YOLO model up to (but not including) the three YOLO layers, because this series of problems comes from them.
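One way to do that is to wrap the detector so only the layers before the three YOLO heads are traced by torch2trt, and run the YOLO decode step in plain PyTorch afterwards. A sketch, assuming your model exposes its pre-head feature maps via a method (here called `features`, an assumed name -- adapt it to your model):

```python
import torch

class BackboneOnly(torch.nn.Module):
    """Expose only the layers before the YOLO heads for tracing."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        # `features` is a placeholder for however your model produces
        # the raw feature maps that feed the three YOLO layers.
        return self.model.features(x)
```

You would then convert with something like `trt_backbone = torch2trt(BackboneOnly(model).eval().cuda(), [example_input])` and apply the original YOLO decode to the TensorRT outputs.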
Same issue
Same issue.
Any solution found, guys?
Same here, any solution?
> You can convert your PyTorch YOLO model up to the three YOLO layers, because this series of problems comes from them.
Do you mean accelerating only the YOLO backbone with TensorRT while keeping a single output?
Try model.eval().cuda() instead of model.cuda().