pytorch-YOLOv4
INT8: WARNING: Missing dynamic range for tensor 1318, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
Hi, when converting YOLOv4 from ONNX to an engine in INT8 mode I get: WARNING: Missing dynamic range for tensor 1318, expect fall back to non-int8 implementation for any layer consuming or producing given tensor. After the conversion, the accuracy is very low.
You need to calibrate when you convert the ONNX model into an engine in INT8 mode. The demo implementation in this repository does not currently support INT8 calibration, so you have to implement calibration yourself. Here are relevant links on how to implement INT8 calibration:
Calibration using the C++ API: https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#enable_int8_c
Calibration using the Python API: https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#optimizing_int8_python
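In case it helps anyone landing here, below is a minimal sketch of what INT8 calibration could look like with the TensorRT Python API (the linked docs describe the same flow). It assumes TensorRT 8.x with pycuda and an explicit-batch ONNX export of YOLOv4; `load_calibration_batches()` is a hypothetical helper that yields preprocessed float32 NCHW batches using the same preprocessing as inference. Adapt names, shapes, and paths to your setup.

```python
# Minimal sketch (not from this repo): INT8 calibration with the TensorRT Python API.
# Assumes TensorRT 8.x + pycuda and an explicit-batch ONNX export of YOLOv4.
# load_calibration_batches() is a hypothetical helper that yields preprocessed
# float32 NCHW batches using the same preprocessing as inference.
import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt


class YoloEntropyCalibrator(trt.IInt8EntropyCalibrator2):
    def __init__(self, batches, batch_size, cache_file="calibration.cache"):
        trt.IInt8EntropyCalibrator2.__init__(self)
        self.batches = iter(batches)      # iterable of np.float32 arrays, e.g. (N, 3, 608, 608)
        self.batch_size = batch_size
        self.cache_file = cache_file
        self.device_input = None

    def get_batch_size(self):
        return self.batch_size

    def get_batch(self, names):
        batch = next(self.batches, None)
        if batch is None:
            return None                   # tells TensorRT that calibration data is exhausted
        batch = np.ascontiguousarray(batch, dtype=np.float32)
        if self.device_input is None:
            self.device_input = cuda.mem_alloc(batch.nbytes)
        cuda.memcpy_htod(self.device_input, batch)
        return [int(self.device_input)]   # one device pointer per network input

    def read_calibration_cache(self):
        try:
            with open(self.cache_file, "rb") as f:
                return f.read()
        except FileNotFoundError:
            return None

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)


logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("yolov4.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError("\n".join(str(parser.get_error(i)) for i in range(parser.num_errors)))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)
config.int8_calibrator = YoloEntropyCalibrator(load_calibration_batches(), batch_size=8)

engine_bytes = builder.build_serialized_network(network, config)
with open("yolov4-int8.engine", "wb") as f:
    f.write(engine_bytes)
```

The calibration cache is written after the first successful run, so subsequent engine builds can skip feeding data again.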
I implemented calibration myself. The warning still occurs when I convert the ONNX model into an engine in INT8 mode. I guess there is a problem with parsing the ONNX.
Hi, I have the same problem while building a ResNet50 engine. I created a calibrator, but the issue still exists.
Hi @danyue333 / @duohaoxue, were you able to successfully convert ONNX into an engine in INT8 mode?
Hi @ersheng-ai, is there a sample showing how to implement INT8 calibration for YOLO or any other object detection model?
The reason for this warning is a missing quantization parameter. Please refer to our open-source quantization tool ppq; its quantization results are better than those of the calibration tool that comes with TensorRT, and accuracy is almost the same as the float32 model. https://github.com/openppl-public/ppq/blob/master/md_doc/deploy_trt_by_OnnxParser.md
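For anyone who exports per-tensor scales from ppq (or another external tool) instead of running TensorRT's own calibrator, a rough sketch of how those ranges could be applied through the TensorRT Python API is below. `ranges` is an assumed dict mapping tensor names to (min, max) values, and the builder/network setup is the same as in the calibration sketch above.

```python
import tensorrt as trt


def apply_dynamic_ranges(network, ranges):
    """Set per-tensor dynamic ranges on a parsed TensorRT network.

    `ranges` is an assumed dict {tensor_name: (min_val, max_val)} exported from
    an external quantization tool. Any tensor left without a range triggers the
    "Missing dynamic range" warning and falls back to non-INT8 execution.
    """
    for i in range(network.num_inputs):
        tensor = network.get_input(i)
        if tensor.name in ranges:
            tensor.set_dynamic_range(*ranges[tensor.name])
    for i in range(network.num_layers):
        layer = network.get_layer(i)
        for j in range(layer.num_outputs):
            tensor = layer.get_output(j)
            if tensor.name in ranges:
                tensor.set_dynamic_range(*ranges[tensor.name])


# Usage sketch: enable INT8 without a calibrator and supply ranges explicitly.
# config.set_flag(trt.BuilderFlag.INT8)
# apply_dynamic_ranges(network, ranges)
```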