🐛 [Bug] Could not implicitly convert NumPy data type: i64 to TensorRT
Bug Description
The TensorRT engine produces an error when run on Jetson for the fcn_resnet model. However, it does not produce an error when run on desktop.
The Dynamo frontend is used to create the TensorRT engine.
Error : [TRT] [E] Could not implicitly convert NumPy data type: i64 to TensorRT.
To Reproduce
Steps to reproduce the behavior:
The following is the relevant code for loading the model and converting it to a TensorRT engine:
import torch
import torch_tensorrt

DEVICE = "cuda"
dtype = torch.float16

input_data = torch.randn(args.input_shape, device=DEVICE)
model = torch.hub.load("pytorch/vision", "fcn_resnet50", pretrained=True)
model.eval().to(DEVICE)

input_data = input_data.to(torch.float16)
model = model.to(torch.float16)

exp_program = torch.export.export(model, (input_data,))
model = torch_tensorrt.dynamo.compile(
    exported_program=exp_program,
    inputs=[input_data],
    min_block_size=args.min_block_size,
    optimization_level=args.optimization_level,
    enabled_precisions={dtype},
    # Set to True for verbose output
    # NOTE: performance regression when the rich library is available
    # https://github.com/pytorch/TensorRT/issues/3215
    debug=True,
    # Setting this to True returns a PythonTorchTensorRTModule, which has a different profiling approach
    use_python_runtime=True,
)

for _ in range(100):
    _ = model(input_data)
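The error text points at an int64 NumPy value reaching the TensorRT converter; TensorRT builds without int64 support only accept int32 constants. As a hedged illustration (not the actual fix in Torch-TensorRT), this is the kind of downcast that sidesteps the conversion failure; the `weight` array here is a hypothetical stand-in for an offending constant:

```python
import numpy as np

# Hypothetical stand-in for an int64 constant that a TensorRT build
# without int64 support would reject.
weight = np.arange(4, dtype=np.int64)

if weight.dtype == np.int64:
    # Safe as long as the values fit in int32 range.
    weight = weight.astype(np.int32)

print(weight.dtype)  # int32
```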
Expected behavior
Environment
Build information about Torch-TensorRT can be found by turning on debug messages
Jetson Orin Developer Kit
- Torch-TensorRT Version (e.g. 1.0.0): 2.4.0a0
- PyTorch Version (e.g. 1.0):
- CPU Architecture: aarch64
- OS (e.g., Linux): Ubuntu 22.04
- How you installed PyTorch (conda, pip, libtorch, source): nvcr.io/nvidia/pytorch:24.06-py3-igpu
- Build command you used (if compiling from source):
- Are you using local sources or building from archives: nvcr.io/nvidia/pytorch:24.06-py3-igpu
- Python version: 3.10.12
- CUDA version: 12.6.68
- GPU models and configuration:
- Any other relevant information: JetPack 6.1, L4T 36.4.0
Additional context
Here are screenshots for a relevant comparison:
Desktop: [screenshot]
Jetson: [screenshot]
What version of TensorRT are you using on jetson vs x86?
On Jetson I am using the 24.06 PyTorch igpu image, which comes with TensorRT 10.1.0.27.
On desktop, TensorRT 10.1.0 is used.
I think this issue might be related to this PR : https://github.com/pytorch/TensorRT/pull/3258. I am facing the same error on Jetson.
Side note (on desktop): I am facing the same issue described in https://github.com/pytorch/TensorRT/issues/3185 with the GoogLeNet model.
I guess not all models are supported via Torch-TensorRT library yet!
@apbose can you look at this bug?
Overall my inclination is that there is a version mismatch somewhere, since this passes on x86 and there isn't any aarch64-specific behavior in torchtrt.
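To check for a version mismatch, it helps to print the exact library versions on both machines and diff them. A minimal sketch (the `report_versions` helper is hypothetical, not part of Torch-TensorRT):

```python
import importlib

def report_versions(modules=("torch", "torch_tensorrt", "tensorrt")):
    """Return a mapping of module name -> version string.

    Reports "not installed" when the import fails and "unknown"
    when the module exposes no __version__ attribute.
    """
    versions = {}
    for name in modules:
        try:
            mod = importlib.import_module(name)
            versions[name] = getattr(mod, "__version__", "unknown")
        except ImportError:
            versions[name] = "not installed"
    return versions

if __name__ == "__main__":
    # Run this on both the Jetson and the x86 desktop and compare.
    for name, version in report_versions().items():
        print(f"{name}: {version}")
```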
Also, you can try the nightly or the latest stable version instead of 2.4.0a0.