onnx-tensorrt

Loading onnx file failed: Parameter check failed. condition: nbInputs > 0 && nbInputs < MAX_CONCAT_INPUTS

Open · chee0007 opened this issue 3 years ago · 1 comment

Description

Converted the pb file to ONNX successfully with opset 13.
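For reference, a conversion along these lines could look like the following minimal sketch, assuming tf2onnx's Python API; the graph path and tensor names are placeholders, not the issue's actual files:

```python
# Minimal sketch of the pb -> ONNX conversion, assuming tf2onnx's Python API.
# "model.pb" and the tensor names are placeholders.
import tensorflow as tf
import tf2onnx

# Load the frozen TensorFlow graph from disk.
with tf.io.gfile.GFile("model.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

# Convert to ONNX at opset 13, matching the issue.
tf2onnx.convert.from_graph_def(
    graph_def,
    input_names=["input:0"],     # placeholder input tensor name
    output_names=["output:0"],   # placeholder output tensor name
    opset=13,
    output_path="250422.onnx",
)
```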

When trying to load the ONNX file into TensorRT, parsing fails with the following errors:

[04/25/2022-16:22:49] [W] [TRT] onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[04/25/2022-16:22:49] [E] Error[3]: [network.cpp::addConcatenation::670] Error Code 3: Internal Error (Parameter check failed at: optimizer/api/network.cpp::addConcatenation::670, condition: nbInputs > 0 && nbInputs < MAX_CONCAT_INPUTS)
[04/25/2022-16:22:49] [E] [TRT] ModelImporter.cpp:723: --- End node ---
[04/25/2022-16:22:49] [E] [TRT] ModelImporter.cpp:726: ERROR: builtin_op_importers.cpp:443 In function importConcat:
[8] Assertion failed: layer && "Failed to register layer."
[04/25/2022-16:22:49] [E] Failed to parse onnx file
[04/25/2022-16:22:49] [I] Finish parsing network model
[04/25/2022-16:22:49] [E] Parsing model failed
[04/25/2022-16:22:49] [E] Engine creation failed
[04/25/2022-16:22:49] [E] Engine set up failed

Environment

TensorRT Version: 8.0.3.4
ONNX-TensorRT Version / Branch:
GPU Type: RTX2080Ti
Nvidia Driver Version: 470.103.01
CUDA Version: 11.2
CUDNN Version: 8.2.0.53
Operating System + Version: Ubuntu 18.04
Python Version (if applicable):
TensorFlow + TF2ONNX Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

PB FILE
ONNX FILE

Steps To Reproduce

/usr/local/TensorRT-8.0.3.4/bin/trtexec --onnx=~/250422.onnx
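The same parse failure can also be reproduced through TensorRT's Python API; a minimal sketch, assuming the ONNX file sits in the working directory:

```python
# Minimal sketch: parse the ONNX model with the TensorRT 8 Python API and
# print parser errors instead of building an engine.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("250422.onnx", "rb") as f:
    if not parser.parse(f.read()):
        # On failure, dump every recorded parser error.
        for i in range(parser.num_errors):
            print(parser.get_error(i))
```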

chee0007 · Apr 25 '22

I've checked your model, and it looks like there's a Concat node named StatefulPartitionedCall/model_20/model_4/bilinear_interpolation_4/concat_1 that has 32768 inputs, which exceeds TensorRT's internal concat limit (MAX_CONCAT_INPUTS).
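This is straightforward to verify independently with the onnx Python package; a minimal sketch (the 1000-input threshold is an arbitrary filter, not the actual limit):

```python
# Minimal sketch: list Concat nodes with an unusually large number of inputs.
import onnx

model = onnx.load("250422.onnx")
for node in model.graph.node:
    if node.op_type == "Concat" and len(node.input) > 1000:  # arbitrary threshold
        print(f"{node.name}: {len(node.input)} inputs")
```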

What is the use case of this concat? It can probably be folded down in the source model; failing that, see the workaround sketch after this comment.

kevinch-nv · May 12 '22
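If fixing the source model is impractical, one possible workaround (not proposed in the thread) is to rewrite the oversized Concat as a two-level tree of smaller Concats using onnx-graphsurgeon. A minimal sketch, assuming a chunk size of 1024 stays below MAX_CONCAT_INPUTS:

```python
# Minimal sketch: split every oversized Concat into chunked partial Concats
# feeding one final Concat. Assumes onnx-graphsurgeon; CHUNK = 1024 is an
# assumed value that must stay below TensorRT's MAX_CONCAT_INPUTS.
import onnx
import onnx_graphsurgeon as gs

CHUNK = 1024

graph = gs.import_onnx(onnx.load("250422.onnx"))

for node in list(graph.nodes):
    if node.op != "Concat" or len(node.inputs) <= CHUNK:
        continue
    axis = node.attrs["axis"]
    partials = []
    # First level: concatenate the original inputs in chunks of CHUNK tensors.
    for i in range(0, len(node.inputs), CHUNK):
        out = gs.Variable(f"{node.name}_part_{i}")
        graph.nodes.append(gs.Node(op="Concat", attrs={"axis": axis},
                                   inputs=list(node.inputs[i:i + CHUNK]),
                                   outputs=[out]))
        partials.append(out)
    # Second level: the original node now concatenates the partial results.
    node.inputs = partials

graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "250422_split.onnx")
```

Since concatenation along a fixed axis is associative, the split graph computes exactly the same result; with 32768 inputs and CHUNK = 1024, the first level produces 32 partial tensors, keeping both levels comfortably under the limit.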