
cmake error

Open DataXujing opened this issue 4 years ago • 21 comments

CMakeFiles/yolov5_trt.dir/build.make:86: recipe for target 'CMakeFiles/yolov5_trt.dir/yolov5.cpp.o' failed
make[2]: *** [CMakeFiles/yolov5_trt.dir/yolov5.cpp.o] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/yolov5_trt.dir/all' failed
make[1]: *** [CMakeFiles/yolov5_trt.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

DataXujing avatar Dec 05 '20 09:12 DataXujing

@DataXujing Hello, please give me more information about your build environment and error outputs.

linghu8812 avatar Dec 05 '20 09:12 linghu8812

/usr/local/include/NvOnnxParser.h:27:34: fatal error: NvOnnxParserTypedefs.h: No such file or directory
compilation terminated.
CMakeFiles/Yolov4_trt.dir/build.make:86: recipe for target 'CMakeFiles/Yolov4_trt.dir/Yolov4.cpp.o' failed
make[2]: *** [CMakeFiles/Yolov4_trt.dir/Yolov4.cpp.o] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/Yolov4_trt.dir/all' failed
make[1]: *** [CMakeFiles/Yolov4_trt.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2

maybe need onnx-tensorrt?

DaChaoXc avatar Dec 07 '20 01:12 DaChaoXc

@DaChaoXc Hello, my NvOnnxParser.h file is the same as https://github.com/NVIDIA/TensorRT/blob/release/7.1/include/NvOnnxParser.h, no NvOnnxParserTypedefs.h has been included, perhaps it is a TensorRT installation problem.

linghu8812 avatar Dec 07 '20 03:12 linghu8812

@linghu8812
Input filename: ../cfg/yolov4-csp.onnx
ONNX IR version: 0.0.5
Opset version: 10
Producer name: darknet to ONNX example
Producer version:
Domain:
Model version: 0
Doc string:

WARNING: ONNX model has a newer ir_version (0.0.5) than this parser was built against (0.0.3).
[12/07/2020-13:55:10] [E] [TRT] (Unnamed Layer* 0) [Convolution]: at least 4 dimensions are required for input
[12/07/2020-13:55:10] [I] [TRT] 001_convolutional:Conv -> While parsing node number 1 [BatchNormalization -> "001_convolutional_bn"]: 3
--- Begin node ---
input: "001_convolutional"
input: "001_convolutional_bn_scale"
input: "001_convolutional_bn_bias"
input: "001_convolutional_bn_mean"
input: "001_convolutional_bn_var"
output: "001_convolutional_bn"
name: "001_convolutional_bn"
op_type: "BatchNormalization"
attribute { name: "epsilon" f: 1e-05 type: FLOAT }
attribute { name: "momentum" f: 0.99 type: FLOAT }
--- End node ---
ERROR: /home/xc/xc/code/obj/TensorRT-CenterNet-master/onnx-tensorrt/builtin_op_importers.cpp:598 In function importBatchNormalization:
[6] Assertion failed: scale_weights.shape == weights_shape
[12/07/2020-13:55:10] [E] Failure while parsing ONNX file
start building engine
[12/07/2020-13:55:10] [E] [TRT] Network must have at least one output
build engine done
Yolov4_trt: /home/xc/xc/code/obj/YOLO/yolov4-csp-tensorrt/includes/common/common.hpp:138: void onnxToTRTModel(const string&, const string&, nvinfer1::ICudaEngine*&, const int&): Assertion `engine' failed.
Aborted (core dumped)

DaChaoXc avatar Dec 07 '20 06:12 DaChaoXc
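
The mismatch between the model's ir_version and the parser can be confirmed directly from the ONNX file before feeding it to TensorRT. A small sketch, assuming the onnx Python package is installed and using the model path from the log above:

    import onnx

    model = onnx.load("../cfg/yolov4-csp.onnx")  # path taken from the log above
    print("IR version:", model.ir_version)
    print("Opsets:", [(op.domain, op.version) for op in model.opset_import])
    onnx.checker.check_model(model)  # raises if the graph itself is malformed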

tensorrt-6.1.0.5

DaChaoXc avatar Dec 07 '20 06:12 DaChaoXc

@linghu8812 Hello, can you upload a yolov4-csp.onnx?

DaChaoXc avatar Dec 07 '20 07:12 DaChaoXc

@DaChaoXc Hello,

linghu8812 avatar Dec 07 '20 07:12 linghu8812

@linghu8812 Hello, with your onnx the result is right, so I think my onnx conversion may be wrong. I used python3.6 + onnx1.5.0 and the results are terrible!

DaChaoXc avatar Dec 10 '20 09:12 DaChaoXc

@DaChaoXc Hello, please try the latest Yolov4/export_onnx.py

linghu8812 avatar Dec 10 '20 11:12 linghu8812

@linghu8812 Hello, I used the latest version; the results did not change and are still wrong. ./Yolov4_trt ../config-xmish.yaml ../samples/

Input filename: ../cfg/yolov4x-mish-normal-best.onnx
ONNX IR version: 0.0.5
Opset version: 10
Producer name: darknet to ONNX example
Producer version:
Domain:
Model version: 0
Doc string:

[12/11/2020-13:50:56] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
start building engine
[12/11/2020-13:50:56] [W] [TRT] Half2 support requested on hardware without native FP16 support, performance will be negatively affected.
[12/11/2020-13:50:57] [I] [TRT] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[12/11/2020-13:51:59] [I] [TRT] Detected 1 inputs and 4 output network tensors.
build engine done
writing engine file...
save engine file done
binding0: 5419008
binding1: 1555848
Processing: ../samples//1.jpg
prepareImage
prepare image take: 4.76912 ms.
host2device
execute
Inference take: 86.2528 ms.
execute success
device2host
post process
Post process take: 216.316 ms.
../samples//1_.jpg
Average processing time is 307.338ms

DaChaoXc avatar Dec 11 '20 05:12 DaChaoXc

@DaChaoXc If the number of classes is 80, the binding1 number is wrong; it should be 9446220 = (21 * 21 + 42 * 42 + 84 * 84) * 3 * 85 * 4. If binding1 = 1555848, the class number should be 9, so the labels file should be modified. https://github.com/linghu8812/tensorrt_inference/blob/ffb65126d2fbc327f859b767566766a3b7807822/Yolov4/config-xmish.yaml#L4

linghu8812 avatar Dec 11 '20 06:12 linghu8812

@linghu8812 The number of my labels is 9, and binding1 = 1555848 = (21 * 21 + 42 * 42 + 84 * 84) * 3 * (9 + 5) * 4. Why is binding0 5419008?

DaChaoXc avatar Dec 11 '20 08:12 DaChaoXc

@DaChaoXc

  • 5419008 = 672 * 672 * 3 * 4
  • As I mentioned above, change coco.names to your names file.

linghu8812 avatar Dec 11 '20 08:12 linghu8812
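
For reference, both binding sizes can be checked with a quick calculation; a sketch assuming a 672x672 input, strides 32/16/8 (grids 21/42/84), 3 anchors per cell, and float32 (4-byte) tensors:

    input_size = 672
    grids = [input_size // s for s in (32, 16, 8)]   # [21, 42, 84]
    cells = sum(g * g for g in grids) * 3            # grid cells times 3 anchors

    binding0 = input_size * input_size * 3 * 4       # input binding: 5419008 bytes
    binding1 = 1555848                               # output binding reported above
    num_classes = binding1 // (cells * 4) - 5        # minus (x, y, w, h, obj) -> 9
    print(binding0, num_classes)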

@linghu8812 Hello, I changed coco.names to obj.names: car, bus, person, bike, truck, motor, rider, traffic sign, traffic light (9 labels).

I changed export_onnx.py:

def __init__(self, output_tensors):
    """Initialize with all DarkNet default parameters used creating YOLOv4, and
    specify the output tensors as an OrderedDict for their output dimensions with
    their names as keys.

    Keyword argument:
    output_tensors -- the output tensors as an OrderedDict containing the keys'
    output dimensions
    """
    self.output_tensors = output_tensors
    self._nodes = list()
    self.graph_def = None
    self.input_tensor = None
    self.epsilon_bn = 1e-5
    self.momentum_bn = 0.99
    self.alpha_lrelu = 0.1
    self.param_dict = OrderedDict()
    self.major_node_specs = list()
    self.batch_size = 1
    self.classes = 9##############
    self.num = 9

It seems nothing has changed.

DaChaoXc avatar Dec 11 '20 09:12 DaChaoXc

@DaChaoXc The exported onnx model is not affected by the names file. You only need to change coco.names in line 4 of config-xmish.yaml; the number of lines in the names file determines the number of classes used for TensorRT inference.

linghu8812 avatar Dec 11 '20 09:12 linghu8812

@linghu8812 Found the difference: in the cfg file it is activation=linear instead of activation=logistic.

DaChaoXc avatar Dec 11 '20 10:12 DaChaoXc

@DaChaoXc Congratulations, that's the latest cfg file in AlexeyAB/darknet.

linghu8812 avatar Dec 11 '20 11:12 linghu8812
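
To spot this kind of cfg difference quickly, the activation types used in a darknet cfg can be counted; a small sketch, where the file name is only an assumption:

    from collections import Counter

    cfg_path = "yolov4x-mish.cfg"  # hypothetical cfg file name
    with open(cfg_path) as f:
        activations = Counter(line.strip().split("=", 1)[1]
                              for line in f if line.strip().startswith("activation="))
    print(activations)  # counts per activation type, e.g. mish / linear / logistic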

@DaChaoXc Hello,

  • https://github.com/linghu8812/tensorrt_inference/blob/master/INSTALL.md#tensort-7134
  • Building the engine with TensorRT 7 differs slightly from TensorRT 6. It is recommended to build with TensorRT 7.1.3.4, because it supports ONNX opset 12.
  • The ONNX file can be downloaded from here: https://pan.baidu.com/s/1KvR3Sv7V7PkznBM3ybI5OQ, the extraction code is: 6akh

I want to convert scaled-yolov4-csp. Can I use TensorRT 6.0? TensorRT 7 requires cuda10.2, but my machine has cuda10.1.

lfydegithub avatar Dec 23 '20 10:12 lfydegithub

@lfydegithub

  • With TensorRT 6, opset 10 needs to be selected when exporting the ONNX model: https://github.com/linghu8812/tensorrt_inference/blob/887cca1487395cc46a23537213201d224600a976/yolov5/export_onnx.py#L50
  • The engine-building function needs to be rewritten, since TensorRT 7 and TensorRT 6 are different: https://github.com/linghu8812/tensorrt_inference/blob/887cca1487395cc46a23537213201d224600a976/includes/common/common.hpp#L114-L152
  • If you have Docker, you can use the Dockerfile to build an image.

linghu8812 avatar Dec 23 '20 14:12 linghu8812
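
For PyTorch-based models such as YOLOv5, pinning the opset at export time looks roughly like the sketch below (an illustration with a stand-in module, not the repo's export script; the input shape is an assumption):

    import torch
    import torch.nn as nn

    # stand-in module; in practice this is the detection model in eval() mode
    model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU()).eval()
    dummy = torch.randn(1, 3, 640, 640)  # assumed input shape

    torch.onnx.export(model, dummy, "model_opset10.onnx",
                      opset_version=10,   # TensorRT 6 requires opset 10
                      input_names=["input"], output_names=["output"])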

onnxToTRTModel

Thanks a lot! I ended up installing an extra cuda10.2 anyway... :)

lfydegithub avatar Dec 24 '20 06:12 lfydegithub

@linghu8812

Input filename: ../cfg/yolov4-csp.onnx
ONNX IR version: 0.0.5
Opset version: 10
Producer name: darknet to ONNX example
Producer version:
Domain:
Model version: 0
Doc string:

WARNING: ONNX model has a newer ir_version (0.0.5) than this parser was built against (0.0.3).
[12/07/2020-13:55:10] [E] [TRT] (Unnamed Layer* 0) [Convolution]: at least 4 dimensions are required for input
[12/07/2020-13:55:10] [I] [TRT] 001_convolutional:Conv -> While parsing node number 1 [BatchNormalization -> "001_convolutional_bn"]: 3
--- Begin node ---
input: "001_convolutional"
input: "001_convolutional_bn_scale"
input: "001_convolutional_bn_bias"
input: "001_convolutional_bn_mean"
input: "001_convolutional_bn_var"
output: "001_convolutional_bn"
name: "001_convolutional_bn"
op_type: "BatchNormalization"
attribute { name: "epsilon" f: 1e-05 type: FLOAT }
attribute { name: "momentum" f: 0.99 type: FLOAT }
--- End node ---
ERROR: /home/xc/xc/code/obj/TensorRT-CenterNet-master/onnx-tensorrt/builtin_op_importers.cpp:598 In function importBatchNormalization:
[6] Assertion failed: scale_weights.shape == weights_shape
[12/07/2020-13:55:10] [E] Failure while parsing ONNX file
start building engine
[12/07/2020-13:55:10] [E] [TRT] Network must have at least one output
build engine done
Yolov4_trt: /home/xc/xc/code/obj/YOLO/yolov4-csp-tensorrt/includes/common/common.hpp:138: void onnxToTRTModel(const string&, const string&, nvinfer1::ICudaEngine*&, const int&): Assertion `engine' failed.
Aborted (core dumped)

Hello! Have you solved this problem? I have the same problem.

lkyw5210 avatar Mar 18 '21 05:03 lkyw5210