onnx-tensorrt
ONNX model from TensorFlow failed to convert to TensorRT
I tried to convert an ONNX model to TensorRT on a Jetson Xavier AGX.
I ran this command:
/usr/src/tensorrt/bin/trtexec --onnx=segmentation_model.onnx
And I got this result:
&&&& RUNNING TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=segmentation_model.onnx
[07/01/2022-13:17:54] [I] === Model Options ===
[07/01/2022-13:17:54] [I] Format: ONNX
[07/01/2022-13:17:54] [I] Model: segmentation_model.onnx
[07/01/2022-13:17:54] [I] Output:
[07/01/2022-13:17:54] [I] === Build Options ===
[07/01/2022-13:17:54] [I] Max batch: 1
[07/01/2022-13:17:54] [I] Workspace: 16 MB
[07/01/2022-13:17:54] [I] minTiming: 1
[07/01/2022-13:17:54] [I] avgTiming: 8
[07/01/2022-13:17:54] [I] Precision: FP32
[07/01/2022-13:17:54] [I] Calibration:
[07/01/2022-13:17:54] [I] Safe mode: Disabled
[07/01/2022-13:17:54] [I] Save engine:
[07/01/2022-13:17:54] [I] Load engine:
[07/01/2022-13:17:54] [I] Builder Cache: Enabled
[07/01/2022-13:17:54] [I] NVTX verbosity: 0
[07/01/2022-13:17:54] [I] Inputs format: fp32:CHW
[07/01/2022-13:17:54] [I] Outputs format: fp32:CHW
[07/01/2022-13:17:54] [I] Input build shapes: model
[07/01/2022-13:17:54] [I] Input calibration shapes: model
[07/01/2022-13:17:54] [I] === System Options ===
[07/01/2022-13:17:54] [I] Device: 0
[07/01/2022-13:17:54] [I] DLACore:
[07/01/2022-13:17:54] [I] Plugins:
[07/01/2022-13:17:54] [I] === Inference Options ===
[07/01/2022-13:17:54] [I] Batch: 1
[07/01/2022-13:17:54] [I] Input inference shapes: model
[07/01/2022-13:17:54] [I] Iterations: 10
[07/01/2022-13:17:54] [I] Duration: 3s (+ 200ms warm up)
[07/01/2022-13:17:54] [I] Sleep time: 0ms
[07/01/2022-13:17:54] [I] Streams: 1
[07/01/2022-13:17:54] [I] ExposeDMA: Disabled
[07/01/2022-13:17:54] [I] Spin-wait: Disabled
[07/01/2022-13:17:54] [I] Multithreading: Disabled
[07/01/2022-13:17:54] [I] CUDA Graph: Disabled
[07/01/2022-13:17:54] [I] Skip inference: Disabled
[07/01/2022-13:17:54] [I] Inputs:
[07/01/2022-13:17:54] [I] === Reporting Options ===
[07/01/2022-13:17:54] [I] Verbose: Disabled
[07/01/2022-13:17:54] [I] Averages: 10 inferences
[07/01/2022-13:17:54] [I] Percentile: 99
[07/01/2022-13:17:54] [I] Dump output: Disabled
[07/01/2022-13:17:54] [I] Profile: Disabled
[07/01/2022-13:17:54] [I] Export timing to JSON file:
[07/01/2022-13:17:54] [I] Export output to JSON file:
[07/01/2022-13:17:54] [I] Export profile to JSON file:
[07/01/2022-13:17:54] [I]
Input filename: segmentation_model.onnx
ONNX IR version: 0.0.6
Opset version: 11
Producer name: tf2onnx
Producer version: 1.11.0 89c4c5
Domain:
Model version: 0
Doc string:
[07/01/2022-13:17:57] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: builtin_op_importers.cpp:2523 In function importResize:
[8] Assertion failed: (mode != "nearest" || nearest_mode == "floor") && "This version of TensorRT only supports floor nearest_mode!"
[07/01/2022-13:17:57] [E] Failed to parse onnx file
[07/01/2022-13:17:57] [E] Parsing model failed
[07/01/2022-13:17:57] [E] Engine creation failed
[07/01/2022-13:17:57] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=segmentation_model.onnx
I'm not sure what I can do; I'll be standing by if more information is needed. bug_report.txt
@MarceJara Which TRT version did you use? Could you try TRT 8.4.1?
This is an error that occurs with older TensorRT versions. As @nvpohanh mentioned, can you try upgrading to TensorRT 8.4.1 and importing your model again?