
onnx2trt error

Open taxuezcy opened this issue 2 years ago • 13 comments

Description

Environment

  • TensorRT Version: 8.2.1.8
  • ONNX-TensorRT Version / Branch: 8.2 EA
  • GPU Type:
  • Nvidia Driver Version: 440.33.01
  • CUDA Version: 10.2
  • CUDNN Version:
  • Operating System + Version: Ubuntu 16
  • Python Version (if applicable): 3.6
  • TensorFlow + TF2ONNX Version (if applicable):
  • PyTorch Version (if applicable):
  • Baremetal or Container (if container which image + tag):

The error is:

Input filename: model_0039999_sim.onnx
ONNX IR version: 0.0.7
Opset version: 13
Producer name: pytorch
Producer version: 1.10
Domain:
Model version: 0
Doc string:

Parsing model
[2022-05-09 07:42:01 WARNING] onnx2trt_utils.cpp:370: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
While parsing node number 205 [Range -> "664"]:
ERROR: builtin_op_importers.cpp:3270 In function importRange:
[8] Assertion failed: inputs.at(0).isInt32() && "For range operator with dynamic inputs, this version of TensorRT only supports INT32!"

taxuezcy avatar May 09 '22 07:05 taxuezcy
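A common workaround for this assertion is to route every input of the failing Range node through a Cast-to-INT32 node before parsing. The sketch below is hypothetical and uses a simplified dict-based graph stand-in rather than the real `onnx` package, but the node fields (`op_type`, `inputs`, `outputs`, the Cast `to` attribute) mirror the ONNX spec; a real patch would apply the same transformation with `onnx` or `onnx-graphsurgeon`.

```python
INT32 = 6  # onnx.TensorProto.INT32 enum value in the ONNX spec

def cast_range_inputs_to_int32(nodes):
    """For each Range node, insert a Cast-to-INT32 node in front of every
    input, so TensorRT's importRange only ever sees INT32 dynamic inputs.
    `nodes` is a simplified stand-in for an ONNX graph's node list."""
    patched = []
    for node in nodes:
        if node["op_type"] == "Range":
            new_inputs = []
            for name in node["inputs"]:
                cast_out = f"{name}_i32"  # hypothetical tensor name
                patched.append({
                    "op_type": "Cast",
                    "inputs": [name],
                    "outputs": [cast_out],
                    "attrs": {"to": INT32},
                })
                new_inputs.append(cast_out)
            node = dict(node, inputs=new_inputs)
        patched.append(node)
    return patched
```

Alternatively, the cast can be done at the source: when exporting from PyTorch, building the `torch.arange` bounds as `int32` tensors avoids emitting an INT64 Range in the first place.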

@taxuezcy Thank you for reporting this issue! As the log shows, your case is not supported in TensorRT 8.2. Could you please try the latest release, TensorRT 8.4 EA?

zhenhuaw-me avatar May 13 '22 01:05 zhenhuaw-me

@taxuezcy Hi, I have faced the same problem. Have you tried TensorRT 8.4 EA as suggested? Does the new version of TensorRT solve this problem? Thanks.

lileilai avatar May 20 '22 02:05 lileilai

@lileilai, @jackwish, I have tried; TensorRT 8.4 EA does not work!

taxuezcy avatar May 23 '22 07:05 taxuezcy

~~Glad to hear that! As the issue is resolved, I am going to close it. Please let me know if any further issues.~~

@taxuezcy could you help to add "For range operator with dynamic inputs, this version of TensorRT only supports INT32!" to the title? It might be helpful for others to locate this issue.

zhenhuaw-me avatar May 23 '22 08:05 zhenhuaw-me

@zhenhuaw-me Hello, how did you solve this problem: Assertion failed: inputs.at(0).isInt32() && "For range operator with dynamic inputs, this version of TensorRT only supports INT32!"

wafaer avatar Feb 25 '23 08:02 wafaer

Reopening since I may have misread @taxuezcy 's comment. Sorry for that. @taxuezcy Do you still see this issue?

zhenhuaw-me avatar Mar 08 '23 03:03 zhenhuaw-me

@wafaer The error indicates an unsupported ONNX op spec. Which operator is the assertion processing?

zhenhuaw-me avatar Mar 08 '23 03:03 zhenhuaw-me

I have already resolved it. Upgrading TensorRT to version TensorRT 8.4 GA Update 1 fixed it.

zoufangyu1987 avatar Apr 19 '23 02:04 zoufangyu1987

> I have already resolved it. Upgrading TensorRT to version TensorRT 8.4 GA Update 1 fixed it.

Isn't TensorRT supposed to match the CUDA version? After upgrading TensorRT, shouldn't the CUDA version be upgraded as well?

yth209 avatar May 14 '23 06:05 yth209

@yth209 The TensorRT version I downloaded is the one below. My CUDA is 11.1 and it is still compatible, so the requirement does not seem to be that strict. TensorRT-8.4.2.4.Linux.x86_64-gnu.cuda-11.6.cudnn8.4.tar.gz

zoufangyu1987 avatar May 16 '23 02:05 zoufangyu1987

@yth209 @zoufangyu1987 Starting with 8.0, TensorRT is compatible within the CUDA minor version, e.g. when you download a build for CUDA 11.6, it is compatible with CUDA 11.1-11.8.

For more information, check:

  • TensorRT Compatibility section in the release note: https://docs.nvidia.com/deeplearning/tensorrt/release-notes/index.html#rel-8-6-0-EA:~:text=of%20deprecated%20features.-,Compatibility,-TensorRT%208.6.0%20has
  • CUDA minor version compatibility: https://docs.nvidia.com/deploy/cuda-compatibility/index.html#default-to-minor-version

zhenhuaw-me avatar May 16 '23 03:05 zhenhuaw-me
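The compatibility rule above (a TensorRT build for one CUDA 11.x release runs against any other CUDA 11.x) reduces to comparing CUDA major versions. A minimal sketch of that check, with hypothetical function and parameter names:

```python
def cuda_minor_compatible(build_cuda: str, system_cuda: str) -> bool:
    """Return True when a TensorRT build targeting CUDA `build_cuda`
    can run on a system with CUDA `system_cuda` under CUDA
    minor-version compatibility: the major versions must match,
    while the minor versions are free to differ."""
    build_major = int(build_cuda.split(".")[0])
    system_major = int(system_cuda.split(".")[0])
    return build_major == system_major
```

For example, a `cuda-11.6` TensorRT tarball on a CUDA 11.1 system passes this check, while pairing it with CUDA 10.2 does not.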

> @yth209 @zoufangyu1987 Starting with 8.0, TensorRT is compatible within the CUDA minor version, e.g. when you download a build for CUDA 11.6, it is compatible with CUDA 11.1-11.8.

Thank you, I have resolved this issue. By using a lower version of YOLOv5 to convert to ONNX, the TensorRT version error no longer occurs; it seems to have been a version-mismatch problem. However, I encountered the following error while converting the ONNX model to a TRT file:

[ERROR] (Unnamed Layer* 0) [Slice]: slice is out of input range
[2023-05-14 09:02:05] While parsing node number 9 [Slice -> "onnx::Concat_227"]:
ERROR: /home/yth/onnx-tensorrt-6.0/onnx2trt_utils.hpp:412 In function convert_axis:
[8] Assertion failed: axis >= 0 && axis < nbDims

How can I solve this problem?

yth209 avatar May 16 '23 13:05 yth209
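For context on that last assertion: ONNX allows negative axes on ops such as Slice (counting from the end of the shape), while TensorRT needs axes normalized into `[0, nbDims)`. A hypothetical pure-Python mirror of what a `convert_axis` helper checks, so the failure mode is easy to reproduce:

```python
def convert_axis(axis: int, nb_dims: int) -> int:
    """Normalize an ONNX axis (which may be negative) into TensorRT's
    expected [0, nb_dims) range. Raises when the axis stays out of
    range, i.e. the condition behind the reported assertion
    `axis >= 0 && axis < nbDims`."""
    if axis < 0:
        axis += nb_dims  # ONNX semantics: -1 means the last dimension
    if not (0 <= axis < nb_dims):
        raise ValueError(f"axis {axis} out of range for a {nb_dims}-D tensor")
    return axis
```

The assertion firing therefore usually means the Slice node's `axes` input refers to a dimension the actual input tensor does not have, which points at a mismatch between the exported ONNX graph and the shapes the parser sees.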