
WARNING: ONNX model has a newer ir_version (0.0.5) than this parser was built against (0.0.3).

Open uRENu opened this issue 4 years ago • 3 comments

Description

I want to convert a TensorFlow .pb model to TensorRT. My steps:

  1. python -m tf2onnx.convert --graphdef /root/NER/check2PB/test/1/saved_model.pb --inputs=real_inp:0,real_inp_bound:0,real_inp_part:0,length_input:0 --outputs=import/embedding/crf/cond_2/Merge:0 --opset=13 --output /root/NER/check2PB/uff/model.onnx
  2. onnx2trt /root/NER/check2PB/uff/model.onnx -t /root/NER/check2PB/uff/model.trt

However, the following error occurs in step 2:

```
Input filename: /root/WRZ_test/NER/check2PB/uff/model.onnx
ONNX IR version: 0.0.5
Opset version: 10
Producer name: tf2onnx
Producer version: 1.9.1
Domain:
Model version: 0
Doc string:
WARNING: ONNX model has a newer ir_version (0.0.5) than this parser was built against (0.0.3).
Writing ONNX model (without weights) as text to /root/WRZ_test/NER/check2PB/uff/model.trt
Parsing model
ERROR: real_inp:0_TRT_DYNAMIC_SHAPES:58 In function importInput:
```
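As an aside, the odd-looking "0.0.5" is just the integer ONNX `ir_version` (here 5) rendered as a dotted major.minor.patch triple by the parser. A sketch of that formatting in Python, assuming the decoding scheme used by onnx-tensorrt's `onnx_ir_version_string` helper (illustrative, not the actual source):

```python
def onnx_ir_version_string(ir_version: int) -> str:
    # Split the single ir_version integer into major/minor/patch fields.
    # With small values such as 3, 5, or 6 this always yields "0.0.N",
    # which matches the "0.0.5" vs "0.0.3" seen in the warning above.
    major = ir_version // 1000000
    minor = ir_version % 1000000 // 10000
    patch = ir_version % 10000
    return f"{major}.{minor}.{patch}"

print(onnx_ir_version_string(5))  # the model's ir_version
print(onnx_ir_version_string(3))  # what the parser was built against
```

So the warning simply means the model was exported with a newer ONNX IR than the parser (built against IR 3) understands.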

Environment

TensorRT Version: TensorRT 5.1.5
ONNX-TensorRT Version / Branch: git clone -b 5.1 --recurse-submodules https://github.com.cnpmjs.org/onnx/onnx-tensorrt.git
GPU Type: Tesla T4
Nvidia Driver Version: 410.129
CUDA Version: 10.0.130
CUDNN Version: None
Operating System + Version: Ubuntu 16.04
Python Version (if applicable): 3.7.7
TensorFlow + TF2ONNX Version (if applicable): tensorflow-gpu 1.15.0, tf2onnx 1.9.1
PyTorch Version (if applicable): None
Baremetal or Container (if container which image + tag):

Relevant Files

Steps To Reproduce

uRENu avatar Jul 30 '21 01:07 uRENu

Can you try using a later TensorRT version? TRT 5.1 is not being actively supported anymore.

kevinch-nv avatar Aug 02 '21 18:08 kevinch-nv

> Can you try using a later TensorRT version? TRT 5.1 is not being actively supported anymore.

I have upgraded TensorRT to 7.0.0. Now the following error occurs in step 2:

```
Input filename: /root/WRZ_test/NER/check2PB/uff/model.onnx
ONNX IR version: 0.0.6
Opset version: 11
Producer name: tf2onnx
Producer version: 1.9.1
Domain:
Model version: 0
Doc string:
Writing ONNX model (without weights) as text to /root/WRZ_test/NER/check2PB/uff/model.trt
Parsing model
[2021-08-09 10:45:50 WARNING] /root/drivers/onnx-tensorrt/onnx2trt_utils.cpp:235: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[2021-08-09 10:45:50 WARNING] /root/drivers/onnx-tensorrt/onnx2trt_utils.cpp:261: One or more weights outside the range of INT32 was clamped
While parsing node number 22 [ReverseSequence -> "import/embedding/bilstm/bidirectional_rnn/bw/ReverseV2_ReverseSequence__168:0"]:
ERROR: /root/drivers/onnx-tensorrt/ModelImporter.cpp:134 In function parseGraph:
[8] No importer registered for op: ReverseSequence
```
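For what it's worth, the two INT64 warnings are usually harmless: the parser casts INT64 initializers down to INT32 and clamps any value outside the INT32 range, as the log says. A minimal sketch of that clamping behavior (illustrative only, not the parser's actual code):

```python
# int32 value range
INT32_MIN, INT32_MAX = -(2**31), 2**31 - 1

def cast_int64_to_int32(values):
    # Clamp each int64 value into the int32 range, mirroring the
    # "weights outside the range of INT32 was clamped" warning.
    return [max(INT32_MIN, min(v, INT32_MAX)) for v in values]

print(cast_int64_to_int32([0, 2**40, -(2**40)]))
# in-range values pass through; out-of-range values hit the int32 bounds
```

The hard stop is the last line: no importer is registered for ReverseSequence in this parser version.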

uRENu avatar Aug 09 '21 10:08 uRENu

ReverseSequence has been supported since TRT 8.0. Can you try importing with the newer TRT version?
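For reference, ONNX ReverseSequence (with `batch_axis=0`, `time_axis=1`) reverses each sequence in the batch only up to its given length, leaving any padding untouched; this is what tf2onnx emits for TensorFlow's `ReverseV2`/`reverse_sequence` in the bidirectional LSTM above. A pure-Python sketch of the semantics (not a TensorRT workaround, just what the op computes):

```python
def reverse_sequence(batch, seq_lens):
    # batch: list of sequences (batch axis 0, time axis 1)
    # seq_lens: per-sequence valid length; elements past it are padding
    out = []
    for seq, n in zip(batch, seq_lens):
        # reverse the first n time steps, keep the padding tail as-is
        out.append(seq[:n][::-1] + seq[n:])
    return out

print(reverse_sequence([[1, 2, 3, 4], [5, 6, 7, 8]], [3, 2]))
# → [[3, 2, 1, 4], [6, 5, 7, 8]]
```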

kevinch-nv avatar Jun 16 '22 19:06 kevinch-nv