
Floating point exception of TensorRT 10.5 when converting an ONNX model with trtexec on GPU GeForce RTX 2080 Ti

Open tom-brosch opened this issue 8 months ago • 6 comments

Description

I tried to convert an ONNX model of a U-Net to TensorRT using the trtexec command-line tool, but the conversion aborts with a floating point exception.

Environment

TensorRT Version: 10.5

NVIDIA GPU: GeForce RTX 2080 Ti

NVIDIA Driver Version: 525.60.13

CUDA Version: 12.0

CUDNN Version: 9.6.0.74

Operating System: OpenSUSE

Baremetal or Container (if so, version): TensorRT container nvcr.io/nvidia/tensorrt:24.10-py3

Relevant Files

Log file: logs.txt

Model link: model_cv1-4.zip

Steps To Reproduce

Commands or scripts:

  • Start the Docker container: nvidia-docker run -it --rm -v $PWD:/models nvcr.io/nvidia/tensorrt:24.10-py3
  • Run the conversion: trtexec --onnx=/models/model_cv1-4.onnx --saveEngine=/models/model_cv1-4_trt.engine --minShapes=input:1x1x16x16x16 --optShapes=input:1x1x96x120x96 --maxShapes=input:1x1x96x120x96
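
For reference, the same build can be driven through the TensorRT Python API inside the container. The sketch below is my attempt to mirror the trtexec invocation above; the tensor name input and the min/opt/max shapes are copied from the command line, while the rest is the standard parse-and-build flow, so treat it as an approximation rather than an exact reproduction of what trtexec does internally:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)
network = builder.create_network(0)
parser = trt.OnnxParser(network, logger)

# Parse the same ONNX file that trtexec receives via --onnx
with open("/models/model_cv1-4.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

# Same dynamic-shape profile as --minShapes/--optShapes/--maxShapes
config = builder.create_builder_config()
profile = builder.create_optimization_profile()
profile.set_shape("input",
                  (1, 1, 16, 16, 16),    # min
                  (1, 1, 96, 120, 96),   # opt
                  (1, 1, 96, 120, 96))   # max
config.add_optimization_profile(profile)

# The reported floating point exception occurs during the build step,
# so build_serialized_network should hit the same code path.
serialized = builder.build_serialized_network(network, config)
with open("/models/model_cv1-4_trt.engine", "wb") as f:
    f.write(serialized)
```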

Have you tried the latest release?: Yes, I also tried TensorRT 10.7 with the nvcr.io/nvidia/tensorrt:24.12-py3 image, and it produces the same error.

Can this model run on other frameworks?: I hit the same issue when running the model through the TensorRT EP of ONNX Runtime.
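
This is roughly how I exercise the TensorRT EP in ONNX Runtime (a minimal sketch; the float32 input dtype is an assumption about the model, and the zero-filled tensor is only a placeholder at the optimal profile shape from the trtexec command):

```python
import numpy as np
import onnxruntime as ort

# Route the model through the TensorRT execution provider, with CUDA as fallback
sess = ort.InferenceSession(
    "/models/model_cv1-4.onnx",
    providers=["TensorrtExecutionProvider", "CUDAExecutionProvider"],
)

# Dummy input volume; dtype float32 is assumed
x = np.zeros((1, 1, 96, 120, 96), dtype=np.float32)
outputs = sess.run(None, {"input": x})
print([o.shape for o in outputs])
```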

tom-brosch · Feb 11 '25, 12:02