TensorRT-Yolov3

The size of the engine file

Open guods opened this issue 5 years ago • 5 comments

I created the int8 engine file on a GTX 1060; the engine file is only 60 MB and the detection results are very poor. But when I create the int8 engine file on a TITAN X, the engine file is 500-600 MB and the detection results are good.

guods avatar Sep 11 '19 08:09 guods

Are they fed the same calibration data? And are they the same TensorRT version? My int8 engine built on the 1060 is also about 60 MB. Maybe your TITAN X engine was created in fp32 mode? I am very interested in your results. Please let me know. Thanks

lewes6369 avatar Sep 13 '19 16:09 lewes6369
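
As a hedged sketch of the point raised above (assuming a TensorRT 6/7-style builder API, which the thread does not confirm), this is how int8 can be requested explicitly at build time, so it is clear whether an engine was really built in int8 or silently left at the fp32 default:

```cpp
#include <NvInfer.h>

// Sketch only: request INT8 explicitly and attach a calibrator.
// If neither kINT8 nor kFP16 is set, TensorRT builds the engine in FP32,
// which would also explain a much larger engine file.
void requestInt8(nvinfer1::IBuilder* builder,
                 nvinfer1::IBuilderConfig* config,
                 nvinfer1::IInt8Calibrator* calibrator)
{
    config->setFlag(nvinfer1::BuilderFlag::kINT8);
    config->setInt8Calibrator(calibrator);
}
```

Older TensorRT releases (4/5, the era this issue dates from) express the same thing through `builder->setInt8Mode(true)` and `builder->setInt8Calibrator(...)` directly on the builder.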

They use the same calibration data and the same TensorRT version, and my TITAN X engine was created in int8 mode. Are the detection results good on your 1060 in int8 mode? Is there a large difference in detection results between int8 and float16?

guods avatar Sep 17 '19 02:09 guods

I created the int8 engine on a 1060, 1070, 2060, and T4. The speed gets slower and slower, the test results get better and better, and the engine files get bigger and bigger.

guods avatar Sep 17 '19 02:09 guods

When I create the int8 engine on the TITAN X, I get the warning "Int8 support requested on hardware without native Int8 support, performance will be negatively affected ...", but the engine file is still created, and when I test the images the detection results are good and the speed is faster. The difference in engine file size between int8 and float16 is small. Have you met this problem? Could I communicate with you through QQ? My QQ number is 1120444895. Looking forward to you adding me.

guods avatar Sep 17 '19 03:09 guods
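
The warning quoted above can be anticipated before building. As another hedged sketch (again assuming the IBuilder query methods of TensorRT 6/7, not confirmed by the thread), the platform's native precision support can be checked and the build flags chosen accordingly:

```cpp
#include <NvInfer.h>

// Sketch only: query native precision support instead of forcing INT8 and
// relying on TensorRT's fallback warning.
void choosePrecision(nvinfer1::IBuilder* builder,
                     nvinfer1::IBuilderConfig* config,
                     nvinfer1::IInt8Calibrator* calibrator)
{
    if (builder->platformHasFastInt8()) {
        config->setFlag(nvinfer1::BuilderFlag::kINT8);
        config->setInt8Calibrator(calibrator);
    } else if (builder->platformHasFastFp16()) {
        // On hardware without native INT8 (like the TITAN X that triggered
        // the warning above), forcing INT8 buys little; FP16 where supported,
        // or the FP32 default, avoids the warning.
        config->setFlag(nvinfer1::BuilderFlag::kFP16);
    }
    // With neither flag set, TensorRT builds in FP32.
}
```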

I have solved it. The accuracy was poor because I only used one image as the calibration data. ImageNet models need about 500 images to calibrate the engine; the results became good when I used 100 images as the calibration data.

guods avatar Sep 20 '19 05:09 guods
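
For reference, here is a minimal sketch of the kind of calibrator that streams a list of images into TensorRT's entropy calibration. The class and helper names (`YoloInt8Calibrator`, `loadAndPreprocess`) are placeholders, not this repository's actual implementation, and the `IInt8EntropyCalibrator2` interface (TensorRT 5+) is assumed:

```cpp
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Sketch only: an entropy calibrator that feeds one preprocessed image per
// batch and caches the calibration table so it is not recomputed every build.
class YoloInt8Calibrator : public nvinfer1::IInt8EntropyCalibrator2
{
public:
    YoloInt8Calibrator(std::vector<std::string> imagePaths,
                       int c, int h, int w, std::string cacheFile)
        : mImagePaths(std::move(imagePaths)),
          mInputCount(static_cast<size_t>(c) * h * w),
          mCacheFile(std::move(cacheFile))
    {
        cudaMalloc(&mDeviceInput, mInputCount * sizeof(float));
    }

    ~YoloInt8Calibrator() override { cudaFree(mDeviceInput); }

    int getBatchSize() const noexcept override { return 1; }

    bool getBatch(void* bindings[], const char* [], int) noexcept override
    {
        if (mCurrent >= mImagePaths.size())
            return false;  // calibration data exhausted
        // Placeholder preprocessing: a real calibrator must apply exactly the
        // same resize/normalization as the inference path does.
        std::vector<float> host(mInputCount, 0.0f);
        loadAndPreprocess(mImagePaths[mCurrent++], host);
        cudaMemcpy(mDeviceInput, host.data(), mInputCount * sizeof(float),
                   cudaMemcpyHostToDevice);
        bindings[0] = mDeviceInput;
        return true;
    }

    const void* readCalibrationCache(size_t& length) noexcept override
    {
        std::ifstream in(mCacheFile, std::ios::binary);
        mCache.assign(std::istreambuf_iterator<char>(in),
                      std::istreambuf_iterator<char>());
        length = mCache.size();
        return mCache.empty() ? nullptr : mCache.data();
    }

    void writeCalibrationCache(const void* cache, size_t length) noexcept override
    {
        std::ofstream out(mCacheFile, std::ios::binary);
        out.write(static_cast<const char*>(cache), length);
    }

private:
    // Hypothetical helper: fill the host buffer with the preprocessed image.
    void loadAndPreprocess(const std::string& /*path*/,
                           std::vector<float>& /*host*/) {}

    std::vector<std::string> mImagePaths;
    size_t mInputCount;
    std::string mCacheFile;
    size_t mCurrent = 0;
    void* mDeviceInput = nullptr;
    std::vector<char> mCache;
};
```

As the resolution above suggests, a single calibration image is far too few; a set on the order of 100-500 representative images gives the entropy calibration enough activation statistics to pick reasonable scales.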