
[E] [TRT] Network must have at least one output

Open DataXujing opened this issue 3 years ago • 26 comments

After completing the setup following the ScaledYOLOv4 configuration steps, running YOLOv4-p7 produces the error shown in the title!

DataXujing avatar Dec 08 '20 07:12 DataXujing

@DataXujing Please check whether the ONNX file path in config.yaml is correct; try writing an absolute path.

linghu8812 avatar Dec 08 '20 07:12 linghu8812
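The path check suggested above can be sketched with a few lines of stdlib Python; best.onnx is a hypothetical filename standing in for whatever path config.yaml contains:

```python
import os

# Hypothetical relative path standing in for the one set in config.yaml.
onnx_rel = "best.onnx"

# Resolve to an absolute path and confirm the file actually exists
# before handing it to the TensorRT ONNX parser.
onnx_abs = os.path.abspath(onnx_rel)
print(onnx_abs)
print(os.path.isfile(onnx_abs))
```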

myuser@ubuntu:~/xujing/tensorRT_apply/scaleyolov4/build$ ./ScaledYOLOv4_trt ../config-p7.yaml ../samples/
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:604] Reading dangerously large protocol message.  If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons.  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:81] The total number of bytes read was 1144532591
----------------------------------------------------------------
Input filename:   /home/myuser/xujing/tensorRT_apply/scaleyolov4/best.onnx
ONNX IR version:  0.0.6
Opset version:    12
Producer name:    pytorch
Producer version: 1.6
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:604] Reading dangerously large protocol message.  If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons.  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:81] The total number of bytes read was 1144532591
ERROR: ModelImporter.cpp:92 In function parseGraph:
[8] Assertion failed: convertOnnxWeights(initializer, &weights, ctx)
[12/08/2020-15:39:42] [E] Failure while parsing ONNX file
start building engine
[12/08/2020-15:39:42] [E] [TRT] Network must have at least one output
[12/08/2020-15:39:42] [E] [TRT] Network validation failed.
build engine done
ScaledYOLOv4_trt: /home/myuser/xujing/tensorRT_apply/scaleyolov4/./includes/common/common.hpp:138: void onnxToTRTModel(const string&, const string&, nvinfer1::ICudaEngine*&, const int&): Assertion `engine' failed.
Aborted (core dumped)

I still get this error.

DataXujing avatar Dec 08 '20 07:12 DataXujing

@DataXujing Which version of TensorRT are you using? Please try this onnx: https://pan.baidu.com/s/1Sp-sOT_mYYVXgXE9uN_ShQ, extraction code: hytj

linghu8812 avatar Dec 08 '20 08:12 linghu8812

I'm using TensorRT 7.0.0.11.

DataXujing avatar Dec 09 '20 01:12 DataXujing

@DataXujing https://github.com/linghu8812/tensorrt_inference/blob/bdb3ac319e634f162984a32c93636ab50ae55a2f/ScaledYOLOv4/export_onnx.py#L55

When exporting the ONNX model, try opset=10; 7.0.0.11 does not support opset=12.

linghu8812 avatar Dec 09 '20 01:12 linghu8812

@linghu8812 the same error in yolov5s.onnx, any suggestions?

sporterman avatar Dec 15 '20 08:12 sporterman

When exporting the ONNX model, try opset=10; 7.0.0.11 and below do not support opset=12.

linghu8812 avatar Dec 15 '20 15:12 linghu8812

@linghu8812 Tried it; it still doesn't seem to work.

sporterman avatar Dec 16 '20 01:12 sporterman

@sporterman Try the following code to test whether yolov5s.onnx is correct:

import onnxruntime
import numpy as np

sess_options = onnxruntime.SessionOptions()
sess = onnxruntime.InferenceSession('./yolov5s.onnx', sess_options)
data = [np.random.rand(1, 3, 640, 640).astype(np.float32)]
input_names = sess.get_inputs()
feed = zip(sorted(i_.name for i_ in input_names), data)
result = sess.run(None, dict(feed))
print(result[0].shape)

The output should be

(1, 25200, 85)

linghu8812 avatar Dec 16 '20 02:12 linghu8812

@linghu8812 Tried it. I had picked the wrong file: it should have been the export_onnx file, but I kept copying the python export.py command from the top of the README, so the resulting onnx file was wrong. Thanks for the correction.

sporterman avatar Dec 16 '20 07:12 sporterman

@linghu8812 Hello, the same error occurs when parsing an onnx model converted from retinaface (mxnet). Do you have any ideas?

IGnoredBird avatar Jan 27 '21 02:01 IGnoredBird

@sporterman Try the following code to test whether yolov5s.onnx is correct:

import onnxruntime
import numpy as np

sess_options = onnxruntime.SessionOptions()
sess = onnxruntime.InferenceSession('./yolov5s.onnx', sess_options)
data = [np.random.rand(1, 3, 640, 640).astype(np.float32)]
input_names = sess.get_inputs()
feed = zip(sorted(i_.name for i_ in input_names), data)
result = sess.run(None, dict(feed))
print(result[0].shape)

The output should be

(1, 25200, 85)

@IGnoredBird Use this method to test whether the onnx file's output is correct.

linghu8812 avatar Jan 27 '21 02:01 linghu8812

Traceback (most recent call last):
  File "test.py", line 1349, in <module>
    result = sess.run(None, dict(feed))
  File "/harddisk/anaconda3/envs/flask/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 124, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Resize node. Name:'ssh_c3_up' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/tensor/upsample.cc:1036 onnxruntime::common::Status onnxruntime::Upsample<T>::Compute(onnxruntime::OpKernelContext*) const [with T = float] sizes != nullptr && sizes->Shape().Size() != 0 was false. Either scales or sizes MUST be provided as input.

IGnoredBird avatar Jan 27 '21 03:01 IGnoredBird

@linghu8812 The model was converted with export_onnx.py and reported no errors. The onnx test just now failed, as shown above.

IGnoredBird avatar Jan 27 '21 03:01 IGnoredBird

@IGnoredBird Which tensorrt and onnx versions are you using? Also, what opset value did you set when exporting?

bobbilichandu avatar Jan 27 '21 05:01 bobbilichandu

@chandu1263 tensorrt 7.1.3.4, onnx 1.8.0. When I test the onnx model I get this message: 2021-01-27 13:56:34.505019382 [W:onnxruntime:Default, upsample.h:73 UpsampleBase] tf_half_pixel_for_nn is deprecated since opset 13, yet this opset 13 model uses the deprecated attribute. But I have not yet found how to set the opset; the export_onnx.py script uses mxnet.contrib's export_model API to export the onnx model. Thanks.

IGnoredBird avatar Jan 27 '21 06:01 IGnoredBird

@IGnoredBird Unlike pytorch, changing the opset of an onnx model exported by mxnet requires modifying the code inside mxnet contrib; try using onnx==1.5.0.

linghu8812 avatar Jan 27 '21 06:01 linghu8812

@chandu1263 @linghu8812 Thanks, problem solved with onnx==1.5.0.

IGnoredBird avatar Jan 27 '21 06:01 IGnoredBird

tensorrt: 7.0.0.11. I tried both onnx 1.5.0 and 1.6.0 and neither works, although onnxruntime does produce output. The error message is as follows:


Input filename:   ../yolov4-p5.onnx
ONNX IR version:  0.0.6
Opset version:    11
Producer name:    pytorch
Producer version: 1.6
Domain:
Model version:    0
Doc string:

ERROR: ModelImporter.cpp:92 In function parseGraph:
[8] Assertion failed: convertOnnxWeights(initializer, &weights, ctx)
[02/26/2021-10:04:39] [E] Failure while parsing ONNX file
start building engine
[02/26/2021-10:04:39] [E] [TRT] Network must have at least one output
[02/26/2021-10:04:39] [E] [TRT] Network validation failed.
build engine done
ScaledYOLOv4_trt: /home/work/deep_learning/detection/inference/tensorrt_inference/ScaledYOLOv4/../includes/common/common.hpp:138: void onnxToTRTModel(const string&, const string&, nvinfer1::ICudaEngine*&, const int&): Assertion `engine' failed.
Aborted (core dumped)

@linghu8812

deep-practice avatar Feb 26 '21 02:02 deep-practice

@deep-practice 7.0.0.11 use opset=10 when export onnx models

linghu8812 avatar Feb 27 '21 01:02 linghu8812

@deep-practice 7.0.0.11 use opset=10 when export onnx models

Are there any requirements on the onnx version?

DaChaoXc avatar Mar 10 '21 13:03 DaChaoXc

When exporting the ONNX model, try opset=10; 7.0.0.11 and below do not support opset=12.

Where exactly can I check which opset each TensorRT version supports?

DaChaoXc avatar Mar 10 '21 14:03 DaChaoXc

@DaChaoXc TensorRT 7.1 and above support opset 12; 6.0 and 7.0 support opset 10. A newer onnx package can still export lower opsets.

linghu8812 avatar Mar 11 '21 00:03 linghu8812
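The version pairs mentioned above can be captured in a tiny lookup helper. The table reflects only the versions discussed in this thread; consult NVIDIA's official TensorRT support matrix for any other build:

```python
# Max ONNX opset per TensorRT major.minor release, as stated in this
# thread; not an exhaustive table.
TRT_MAX_OPSET = {
    "6.0": 10,
    "7.0": 10,
    "7.1": 12,
}

def max_opset(trt_version: str) -> int:
    """Look up the max supported opset for a dotted TensorRT version."""
    major_minor = ".".join(trt_version.split(".")[:2])
    return TRT_MAX_OPSET[major_minor]

print(max_opset("7.0.0.11"))  # -> 10
print(max_opset("7.1.3.4"))   # -> 12
```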

@deep-practice 7.0.0.11 use opset=10 when export onnx models

That's right!!

DaChaoXc avatar Mar 16 '21 09:03 DaChaoXc

@sporterman try the following code to test whether yolov5s.onnx is correct

import onnxruntime
import numpy as np

sess_options = onnxruntime.SessionOptions()
sess = onnxruntime.InferenceSession('./yolov5s.onnx', sess_options)
data = [np.random.rand(1, 3, 640, 640).astype(np.float32)]
input_names = sess.get_inputs()
feed = zip(sorted(i_.name for i_ in input_names), data)
result = sess.run(None, dict(feed))
print(result[0].shape)

The output result should be

(1, 25200, 85)

I created the ONNX file using export_onnx.py and even tested the output using this method, and it is correct, yet I still receive the error `engine->getNbBindings() == 2' failed`. I am using a Jetson Nano device and trying yolov5s. Any advice?

abuelgasimsaadeldin avatar May 28 '21 15:05 abuelgasimsaadeldin

./mmpose_trt ../../../configs/mmpose/config.yaml ../../../samples/pedestrian

Input filename:   ../hrnet_w48_coco_256x192.onnx
ONNX IR version:  0.0.6
Opset version:    11
Producer name:    pytorch
Producer version: 1.7
Domain:
Model version:    0
Doc string:

While parsing node number 81 [Resize]:
ERROR: ModelImporter.cpp:124 In function parseGraph:
[5] Assertion failed: ctx->tensors().count(inputName)
[08/02/2022-21:57:55] [E] Failure while parsing ONNX file
start building engine
[08/02/2022-21:57:55] [E] [TRT] Network must have at least one output
[08/02/2022-21:57:55] [E] [TRT] Network validation failed.
build engine done
mmpose_trt: /home/pcb/Algorithm/tensorrt_inference/code/src/model.cpp:46: void Model::OnnxToTRTModel(): Assertion `engine' failed.
Aborted (core dumped)

I tried opset=10, but mmpose says it only supports opset 11:

python3 tools/deployment/pytorch2onnx.py /home/pcb/Algorithm/tensorrt_inference/project/mmpose/mmpose-master/configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w48_coco_256x192.py hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth --output-file hrnet_w48_coco_256x192.onnx
Traceback (most recent call last):
  File "tools/deployment/pytorch2onnx.py", line 134, in <module>
    assert args.opset_version == 11, 'MMPose only supports opset 11 now'
AssertionError: MMPose only supports opset 11 now

pcb9382 avatar Aug 02 '22 14:08 pcb9382