Error Code 2: Internal Error (Assertion upperBound != 0 failed. Unknown embedded device detected. Please update the table with the entry

Open jylink opened this issue 2 years ago • 15 comments

Description

Hi, I tried to convert an ONNX model to a TRT engine on a Jetson NX (JetPack 4.6, TRT 8.2.1, CUDA 10.2) but got an Internal Error. I searched online but could not find any clue about this error message.

FYI, the same ONNX model converts to TRT successfully on my Jetson Nano (JetPack 4.5, TRT 7.1.3, CUDA 10.2) and on a Windows PC (TRT 8.2.1, CUDA 11.0).

trt version 8.2.1.8

[03/18/2022-16:54:16] [TRT] [W] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.

[03/18/2022-16:54:16] [TRT] [W] onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped

[03/18/2022-16:54:18] [TRT] [E] 2: [utils.cpp::checkMemLimit::380] Error Code 2: Internal Error (Assertion upperBound != 0 failed. Unknown embedded device detected. Please update the table with the entry: {{1794, 6, 16}, 12653},)

Traceback (most recent call last):

  File "tools/export_trt.py", line 77, in <module>

    f.write(engine.serialize())

AttributeError: 'NoneType' object has no attribute 'serialize'
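
(For context: the AttributeError above is the usual symptom of a failed build. When TensorRT cannot build the engine, the builder returns None and the export script then tries to serialize it. Below is a minimal sketch of an error-checked ONNX-to-TRT export using the TensorRT 8.x Python API; paths and workspace size are placeholders, since the actual tools/export_trt.py is not shown.)

```python
# Minimal error-checked ONNX -> TRT export (TensorRT 8.x Python API).
# File paths and workspace size are placeholders.
import sys
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        sys.exit("ONNX parsing failed")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 30  # 1 GiB

engine = builder.build_engine(network, config)
if engine is None:
    # This is the state that produces the AttributeError above: the build
    # failed (here, inside checkMemLimit), so there is nothing to serialize.
    sys.exit("Engine build failed; check the TRT error log above")

with open("model.trt", "wb") as f:
    f.write(engine.serialize())
```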

Environment

TensorRT Version: 8.2.1.8
NVIDIA GPU: Jetson NX (JetPack 4.6)
NVIDIA Driver Version:
CUDA Version: 10.2
CUDNN Version: 8.2.1
Operating System:
Python Version (if applicable):
Tensorflow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if so, version):

jylink avatar Mar 18 '22 09:03 jylink

Hello @jylink , seems a bug in trt. could you provide us with the onnx model for debug? thanks!

ttyio avatar Apr 07 '22 12:04 ttyio

https://github.com/jylink/tmp/blob/main/ace-8-best.onnx

jylink avatar Apr 08 '22 01:04 jylink

Thanks @jylink , the fix will be available in next Jetpack release.

ttyio avatar Apr 08 '22 07:04 ttyio

So, which JetPack release will fix this bug? I have the same problem as the author, and my Jetson NX is on JetPack 4.6.1.

zsw360720347 avatar Jun 23 '22 06:06 zsw360720347

(Quoting the original issue description above.)

Hi, have you solved the problem?

zsw360720347 avatar Jun 23 '22 07:06 zsw360720347

Same error when running official onnx model: /usr/src/tensorrt/bin/trtexec --onnx=./data/resnet50/ResNet50.onnx

[utils.cpp::checkMemLimit::380] Error Code 2: Internal Error (Assertion upperBound != 0 failed. Unknown embedded device detected. Please update the table with the entry: {{1794, 6, 32}, 25535},)

Kailthen avatar Jun 24 '22 05:06 Kailthen

This should have been fixed in TRT 8.4 GA.

nvpohanh avatar Jun 24 '22 06:06 nvpohanh

Does TRT 8.4 not support JetPack 4.6.1? So do I still need to update JetPack? Mine is 4.6.1 now.

https://github.com/NVIDIA/TensorRT/tree/release/8.4#prerequisites

zsw360720347 avatar Jun 24 '22 06:06 zsw360720347

TRT 8.4 should be included in JetPack 5.0, which will be released soon.
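
(As a quick sanity check before or after upgrading, the TRT version actually installed on the device can be read from the Python bindings; a minimal sketch:)

```python
import tensorrt as trt

# Prints the installed TensorRT version, e.g. "8.2.1.8" on JetPack 4.6
# or an 8.4.x release on JetPack 5.0.
print(trt.__version__)
```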

zerollzeng avatar Jun 24 '22 07:06 zerollzeng

Is there a workaround for this problem on JetPack 4.6.1? I'm having the exact same issue, and migrating the entire project to Ubuntu 20.04 just for TRT is really not an option.

Zephyr69 avatar Aug 23 '22 03:08 Zephyr69

Building TensorRT from source resulted in the same issue as well.

Zephyr69 avatar Aug 24 '22 07:08 Zephyr69

This is indeed a bug; I think upgrading to JetPack 5.0 is the only option.

zerollzeng avatar Aug 24 '22 16:08 zerollzeng

@zerollzeng Is there some way to downgrade to JetPack 4.5.x?

Zephyr69 avatar Aug 24 '22 23:08 Zephyr69

Reflash it? But I think you will hit this issue on 4.5 too, and there might not even be a JetPack 4.5 image for this device.

zerollzeng avatar Aug 25 '22 14:08 zerollzeng

@jylink On my NX 8 GB RAM eMMC module with JetPack 4.6 and TensorRT 8.0.1.6, it works fine. But on the 16 GB RAM module with the same software config as yours, it doesn't work. I haven't tried to downgrade the 16 GB module, though.

Which NX module is yours?

Zephyr69 avatar Aug 27 '22 02:08 Zephyr69

Jetson NX (JetPack 4.6.2)
TensorRT: 8.2.1.8
CUDA: 10.2
CUDNN: 8.2.1

Same error when running the official ONNX model: /usr/src/tensorrt/bin/trtexec --onnx=./data/resnet50/ResNet50.onnx

[utils.cpp::checkMemLimit::380] Error Code 2: Internal Error (Assertion upperBound != 0 failed. Unknown embedded device detected. Please update the table with the entry: {{1794, 6, 16}, 12660},)

whaosoft avatar Oct 14 '22 11:10 whaosoft

@Zephyr69 Were you able to resolve this issue? I am having the same problem.

rida-xavor avatar Oct 23 '22 16:10 rida-xavor

No, it turned out the 16 GB RAM module cannot be downgraded. Since there seems to be no more support for this, you must choose the 8 GB RAM module rather than the 16 GB one if you are sticking to Ubuntu 18.04.

Zephyr69 avatar Oct 23 '22 23:10 Zephyr69

Hi guys, we just released TensorRT_8.2.1.9_Patch_for_Jetpack4.6_Jetson_NX_16GB.tar.gz for this issue; please see https://developer.nvidia.com/embedded/linux-tegra-r3272

zerollzeng avatar Nov 08 '22 02:11 zerollzeng

@zerollzeng For a Jetson NX 16GB on JetPack 4.6.1 with TensorRT 8.2.1.8, how can I solve the problem? Must I upgrade to JetPack 5.0? I need your help, thanks very much.

Audrey528 avatar Nov 12 '22 07:11 Audrey528

Just replace TRT with the package above; it also comes with a README on how to install it. Please uninstall the pre-installed TRT first.

zerollzeng avatar Nov 12 '22 16:11 zerollzeng

Closing since there has been no activity for more than 3 weeks. Please reopen if you still have questions, thanks!

ttyio avatar Dec 06 '22 01:12 ttyio

Hi everyone,

I'm having the same issue with a newer configuration:

Hardware configuration:
Jetson: Orin AGX 32 GB
Board: custom board MIC-733-AO
GPU: 1792-core NVIDIA Ampere GPU with 56 Tensor Cores

Software configuration:
JetPack 5.1 (R35 release, REVISION: 2.1)
TensorRT 8.5.2.2-1+cuda11.4
Torch 2.1.0a0+41361538.nv23.6

Issue: (screenshot attached: export_trt_issue)

Explanation: I have trained detection .pt weights that work when tested without TensorRT. I then converted these weights to an ONNX model without issues, but when I try to convert the ONNX model to a TRT engine, I get this error.

I would be grateful for your help!
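
(A minimal sketch of the export-and-validate flow described above. The model path, input shape, and opset are hypothetical, and it assumes the .pt file stores the full module rather than just a state_dict; the actual detection model is not shown in this thread.)

```python
import torch
import onnx

# Hypothetical paths and input shape for illustration only.
model = torch.load("detector.pt", map_location="cpu").eval()
dummy = torch.zeros(1, 3, 640, 640)

torch.onnx.export(
    model, dummy, "detector.onnx",
    opset_version=13,
    input_names=["images"], output_names=["output"],
)

# Structural validation of the exported graph. A model that passes this check
# but still fails during TRT engine build points at a TRT-side problem
# (like the device-table assertion discussed in this thread) rather than a bad export.
onnx.checker.check_model(onnx.load("detector.onnx"))
print("ONNX graph is structurally valid")
```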

mzacri avatar Apr 04 '24 12:04 mzacri

@mzacri Could you please try the latest JetPack?

zerollzeng avatar Apr 06 '24 17:04 zerollzeng

Hi @zerollzeng,

Thanks for your response. I will give it a shot and come back to you with results.

Regards

mzacri avatar Apr 10 '24 09:04 mzacri