TensorRT-CenterNet
How to quantize model to int8?
Hi, I noticed that the ctdet_dlav0 model in int8 mode is really fast. How do you convert the fp32 model to int8? Could you provide the model? Thanks.
@lsccccc
- Download ctdet_coco_dlav0_1x.pth from here.
- Convert ctdet_coco_dlav0_1x.pth to ctdet_coco_dlav0_1x.onnx.
- Build the engine in int8 mode:

./buildEngine -i ctdet_coco_dlav0_1x.onnx -o model/ctdet_coco_dlav0_1x.engine -m 2 -c calib_img_list.txt
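For what it's worth, the int8 speedup comes from a calibration pass: TensorRT runs the network over the images listed in calib_img_list.txt, records activation ranges, and derives int8 scales from them. buildEngine handles this internally when you pass -c; the snippet below is only a rough Python sketch of the same idea using TensorRT's IInt8EntropyCalibrator2 and pycuda, where the input size, normalization, and class/file names are illustrative assumptions, not the repo's code.

```python
import cv2
import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt


class ImageListCalibrator(trt.IInt8EntropyCalibrator2):
    """Feeds images from a path-list file to TensorRT during int8 calibration."""

    def __init__(self, list_file, input_shape=(1, 3, 512, 512), cache_file="calib.cache"):
        trt.IInt8EntropyCalibrator2.__init__(self)
        self.input_shape = input_shape
        self.cache_file = cache_file
        # assumption: calib_img_list.txt holds one image path per line
        with open(list_file) as f:
            self.paths = [line.strip() for line in f if line.strip()]
        self.index = 0
        self.device_input = cuda.mem_alloc(int(np.prod(input_shape)) * 4)  # float32 buffer

    def get_batch_size(self):
        return self.input_shape[0]

    def get_batch(self, names):
        if self.index >= len(self.paths):
            return None  # no more data -> calibration ends
        n, c, h, w = self.input_shape
        img = cv2.imread(self.paths[self.index])
        img = cv2.resize(img, (w, h)).astype(np.float32) / 255.0
        # NOTE: normalize exactly the way the inference code does (mean/std etc.)
        batch = np.ascontiguousarray(img.transpose(2, 0, 1)[None])  # HWC -> NCHW
        cuda.memcpy_htod(self.device_input, batch)
        self.index += 1
        return [int(self.device_input)]

    def read_calibration_cache(self):
        try:
            with open(self.cache_file, "rb") as f:
                return f.read()
        except FileNotFoundError:
            return None  # no cache yet -> TensorRT runs full calibration

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)
```

Depending on the TensorRT version, such a calibrator is attached to the builder before building (e.g. setting the INT8 flag and `config.int8_calibrator = ImageListCalibrator("calib_img_list.txt")`).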
@CaoWGG Thank you, I will give it a try.
@CaoWGG Do the images in calib_img_list.txt need to be preprocessed in the same way as CenterNet's preprocessing?
@lsccccc You only need to put the image paths in the *.txt file; for the image preprocessing, you can refer to here.
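For reference, here is a minimal Python sketch of CenterNet-style preprocessing for one calibration image, assuming the 512x512 ctdet input and the rounded COCO mean/std from the CenterNet repo; the original code applies an affine (center/letterbox) transform, which the plain resize below only approximates.

```python
import cv2
import numpy as np

# CenterNet COCO normalization constants (BGR order), rounded
MEAN = np.array([0.408, 0.447, 0.470], dtype=np.float32)
STD = np.array([0.289, 0.274, 0.278], dtype=np.float32)

def preprocess(path, height=512, width=512):
    img = cv2.imread(path)                        # BGR, HWC, uint8
    img = cv2.resize(img, (width, height))        # original code uses an affine transform here
    img = (img.astype(np.float32) / 255.0 - MEAN) / STD
    return img.transpose(2, 0, 1)[None]           # HWC -> NCHW float32
```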