
yolov3 cannot use int8 calibration?

Open maggiez0138 opened this issue 4 years ago • 3 comments

Thanks for your great work. I'm trying an int8 conversion. With SSD, the calibrator is called and runs normally, but with yolov3 the calibration process never runs, even though an engine is still generated in the end.

I've tried several configs, shapes, batch sizes, and both the entropy and minmax calibration modes, but still no luck. Any ideas? Thanks in advance.

Settings:

```python
# for yolov3 320
opt_shape_param = [
    [
        [1, 3, 320, 320],  # min shape
        [1, 3, 320, 320],  # opt shape
        [1, 3, 320, 320],  # max shape
    ]
]
```
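For context: in mmdetection-to-tensorrt, each `opt_shape_param` entry is a `[min, opt, max]` shape triple, and using the same shape for all three (as above) simply builds a fixed-shape engine. A small sanity check like the following (the `check_opt_shape_param` helper is hypothetical, not part of the library) can rule out a malformed shape triple as the cause:

```python
# Hypothetical helper: sanity-check an opt_shape_param before conversion.
# Each entry must be a [min, opt, max] triple that is elementwise ordered.

def check_opt_shape_param(opt_shape_param):
    """Return True if every [min, opt, max] triple satisfies min <= opt <= max."""
    for min_shape, opt_shape, max_shape in opt_shape_param:
        for lo, mid, hi in zip(min_shape, opt_shape, max_shape):
            if not (lo <= mid <= hi):
                return False
    return True

# for yolov3 320 (fixed shape, so min == opt == max)
opt_shape_param = [
    [
        [1, 3, 320, 320],
        [1, 3, 320, 320],
        [1, 3, 320, 320],
    ]
]
print(check_opt_shape_param(opt_shape_param))  # True
```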

Environment:

- GPU: Tesla V100
- nvidia-driver: 418.152.00
- cuda: 11.0
- cudnn: 8.0
- tensorrt: 7.1.2.8
- pytorch: 1.7
- torchvision: 0.8
- mmdetection: 2.7

maggiez0138 avatar Dec 29 '20 07:12 maggiez0138

I met the same problem. Could anyone supply any suggestion? Thanks in advance...

shiyongming avatar Jan 05 '21 01:01 shiyongming

> I met the same problem. Could anyone supply any suggestion? Thanks in advance...

Please take a look at our open-source quantization tool ppq. Its quantization accuracy is generally better than TensorRT's built-in calibration, and if you run into issues, we can help you solve them: https://github.com/openppl-public/ppq/blob/master/md_doc/deploy_trt_by_OnnxParser.md
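For readers unfamiliar with what calibration actually produces: int8 quantization needs a per-tensor scale, and the minmax calibrator mentioned above derives it from the largest observed absolute activation value. A minimal pure-Python sketch of that idea (an illustration only, not the actual TensorRT or ppq implementation):

```python
# Minimal illustration of minmax int8 calibration: the scale maps the
# largest observed |activation| to the int8 limit 127. Sketch of the
# concept only, not the TensorRT or ppq implementation.

def minmax_scale(activations):
    """Per-tensor scale so that max(|x|) maps to 127."""
    amax = max(abs(x) for x in activations)
    return amax / 127.0

def quantize(x, scale):
    """Quantize a float to int8 with round-to-nearest and clamping."""
    q = round(x / scale)
    return max(-128, min(127, q))

acts = [-6.35, 0.1, 2.0, 6.35]   # calibration activations
scale = minmax_scale(acts)       # 6.35 / 127 = 0.05
print(quantize(2.0, scale))      # 40
print(quantize(6.35, scale))     # 127
```

The entropy calibrator differs only in how it picks the clipping range (it minimizes the KL divergence between the float and quantized distributions instead of using the raw max), but the resulting scale is used the same way.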

Lenan22 avatar Oct 31 '22 02:10 Lenan22
