TensorRT-Yolov3

Why is int8 mode performance worse on Jetson TX2?

Open · jfangah opened this issue 5 years ago · 1 comment

Thanks for your work! But I'm confused about why the int8 model performs worse on the Jetson TX2. The inference time of the fp32 416 model is about 250 ms and the inference time of the fp16 416 model is about 200 ms, but the inference time of the int8 model is about 300 ms. I want to know why the int8 model works on x86 but fails on the TX2.
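To make the comparison above concrete, the quoted per-frame latencies can be converted to throughput. This is a trivial hypothetical helper, using only the numbers reported in the comment:

```python
# Hypothetical helper: convert the latencies quoted above (ms per frame)
# into frames per second, to make the precision comparison concrete.
def fps(latency_ms):
    """Frames per second for a given per-frame latency in milliseconds."""
    return 1000.0 / latency_ms

# Numbers quoted in the comment above (YOLOv3-416 on Jetson TX2):
latencies = {"fp32": 250.0, "fp16": 200.0, "int8": 300.0}

for mode, ms in latencies.items():
    # e.g. fp32: 250 ms/frame -> 4.0 FPS
    print(f"{mode}: {ms:.0f} ms/frame -> {fps(ms):.1f} FPS")
```

So int8 is not just slightly slower here: it drops below both fp32 and fp16 in throughput, which is the opposite of what quantization should achieve.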

jfangah · Aug 21 '19 10:08

It seems that the TX2 doesn't support int8; see the linked discussion of int8 calibration support on the TX2.
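One way to confirm this on a given board is to query TensorRT's platform capability flags before choosing a build precision. The selection helper below is a hypothetical sketch of that logic; the real flags would come from the TensorRT Python API (`Builder.platform_has_fast_fp16` / `platform_has_fast_int8`), shown here only in comments so the snippet stays self-contained:

```python
# Sketch: choose a build precision from TensorRT's capability flags.
# With the real API, the flags would come from something like:
#
#   import tensorrt as trt
#   builder = trt.Builder(trt.Logger())
#   has_fp16 = builder.platform_has_fast_fp16
#   has_int8 = builder.platform_has_fast_int8
#
# The helper below is a hypothetical illustration of the selection logic.
def pick_precision(has_fast_fp16, has_fast_int8):
    """Prefer int8 only when the GPU has fast int8 kernels; else fp16/fp32."""
    if has_fast_int8:
        return "int8"
    if has_fast_fp16:
        return "fp16"
    return "fp32"

# On a TX2, which reportedly lacks fast int8 kernels but has fast fp16,
# this would select fp16 rather than falling into a slow int8 path:
print(pick_precision(has_fast_fp16=True, has_fast_int8=False))
```

Building an int8 engine on hardware without fast int8 kernels can force TensorRT into fallback paths, which would explain int8 being slower than fp16 on the TX2.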

I also tested the yolov3-416 (fp16) speed on the TX2; it's about 211 ms. The same config runs at about 14 ms per image on my GTX 1060. Have you tested tiny-yolov3 TensorRT performance on the TX2?
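For numbers like "211 ms" or "14 ms per image" to be comparable across boards, they should be averaged over many runs after a warmup phase. A minimal timing sketch, where `infer` is a hypothetical stand-in for the real engine's execute call:

```python
import time

# Hypothetical benchmarking sketch: average inference latency over many
# runs after a warmup, which is how per-image numbers like those above
# are usually measured. `infer` stands in for the real TensorRT call.
def benchmark(infer, warmup=5, runs=50):
    """Return the mean latency of infer() in milliseconds."""
    for _ in range(warmup):      # warm up clocks and caches, discard timings
        infer()
    start = time.perf_counter()
    for _ in range(runs):
        infer()
    return (time.perf_counter() - start) / runs * 1000.0

# Dummy stand-in for an engine's execute call:
ms = benchmark(lambda: sum(range(1000)))
print(f"mean latency: {ms:.3f} ms")
```

On Jetson boards it also matters to fix the power mode and clocks (e.g. via nvpmodel/jetson_clocks) before benchmarking, otherwise timings vary run to run.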

ElonKou · Sep 06 '19 10:09