TensorRT-Yolov3
Why does int8 mode perform worse on Jetson TX2?
Thanks for your work! But I'm confused about why the int8 model performs worse on the Jetson TX2. The inference time of the fp32 416 model is about 250 ms and the fp16 416 model is about 200 ms, but the int8 model takes about 300 ms. I'd like to know why the int8 model works on x86 but fails on the TX2.
It seems that the TX2 doesn't support int8; see the related issue "int8 calibration support on TX2".
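For what it's worth, you can query TensorRT at runtime to see whether the platform has fast INT8/FP16 kernels before building an engine. A minimal sketch using the standard builder API (the `Logger` class here is just a placeholder; the exact `ILogger::log` signature and `destroy()` vs `delete` depend on your TensorRT version):

```cpp
#include <iostream>
#include <NvInfer.h>

// Minimal logger required to create a TensorRT builder.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) noexcept override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
};

int main()
{
    Logger logger;
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(logger);

    // If platformHasFastInt8() returns false, an INT8 engine falls back to
    // slower code paths and can end up slower than FP16 or even FP32.
    std::cout << "Fast FP16: " << builder->platformHasFastFp16() << std::endl;
    std::cout << "Fast INT8: " << builder->platformHasFastInt8() << std::endl;

    builder->destroy();  // on TensorRT 8+ use `delete builder;` instead
    return 0;
}
```

If the INT8 check comes back false on the TX2, that would explain the slowdown you're seeing.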
I also tested the yolo3-416 (fp16) speed on the TX2; it's about 211 ms. The same config runs at about 14 ms per image on my GTX 1060. Have you tested tiny-yolo3-trt performance on the TX2?