yolo2_light
lower object percentage using -quantized
Hi, AlexeyAB,
Thank you for your work on the quantized version. I'm trying INT8 via the -quantized option with the yolov3-tiny cfg/weights on dog.jpg, and I get much lower percentages with -quantized.
yolov3-tiny without -quantized gets dog/bike/truck = 81%/38%/62% (72% on car), while yolov3-tiny with -quantized gets dog/bike/truck = 51%/13%/61% (I had to set the threshold to 0.1 to find the bike...).
Is this correct? What makes this difference in the quantized version? Any suggestions for fixing this difference?
Thanks
@trustin77 Hi,
Is this correct?
Yes.
What makes this difference in the quantized version?
The range of float32 values is larger than the range of int8 values, so any value greater than 127 is clipped to 127 (i.e. these values are reduced, so the final values are reduced too). This is called saturation: http://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf
Any suggestions for fixing this difference?
I recommend using a lower threshold for INT8 (`-thresh 0.15`) than for Float32 (`-thresh 0.25`).
@AlexeyAB thanks for your response.
Regarding the saturation problem, I have read the discussion you had with others here: https://github.com/AlexeyAB/darknet/issues/726#issuecomment-429304218
It suggests the saturation issue comes mainly from the input calibration scale (data saturates when the scale is too large, which causes lower percentages). Can I say that yolov3 gets close detection percentages with/without quantization because of good input_calibration values, while yolov3-tiny.cfg uses input_calibration values that are not good enough, which creates bad results in the quantized version? If so, how can I get better input_calibration values for yolov3-tiny.cfg?
@trustin77
I would say that the saturation issue comes from the different range and precision of INT8 and FLOAT32, since values can be saturated to 0 too. F.e. if we have initial values 0.1, 0.2, 0.3, 100, 110, then:
- if `input_calibration=10`, we get INT8 values 1, 2, 3, 127, 127, i.e. 2 values are clipped
- if `input_calibration=1`, we get INT8 values 0, 0, 0, 100, 110, i.e. 3 values are clipped

In this case the higher `input_calibration=10` may be better than `input_calibration=1`, because only 2 values are clipped.
yolov3-tiny.cfg has good input calibration, because we get approximately the same accuracy (mAP) for INT8 and FLOAT32 (mAP doesn't depend on the probability threshold). So:
- with the default `-thresh` you will get lower FP (false positives) and lower TP (true positives)
- with a lower `-thresh` you will get higher FP and higher TP, approximately the same as for FLOAT32
If you want to recalculate `input_calibration=`, then you should run this command: https://github.com/AlexeyAB/darknet/issues/726#issuecomment-431360364
`./darknet detector calibrate data/obj.data yolo-obj.cfg yolo-obj_10000.weights -input_calibration 100`
Hi @AlexeyAB, thanks for the explanation.
How can I get these files: data/obj.data, yolo-obj.cfg, yolo-obj_10000.weights? They're not included in the yolo2_light files.
And what does "100" after -input_calibration mean?
How can I get these files: data/obj.data, yolo-obj.cfg, yolo-obj_10000.weights?
You should create `data/obj.data` and `yolo-obj.cfg`, and train `yolo-obj_10000.weights` yourself if you want to use a custom model: https://github.com/AlexeyAB/darknet#how-to-train-to-detect-your-custom-objects
Or for default yolov3-tiny.cfg use:
- https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/coco.data
- https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov3-tiny.cfg
- https://pjreddie.com/media/files/yolov3-tiny.weights
- And download MS COCO dataset as described here: https://pjreddie.com/darknet/yolo/
And what does "100" after -input_calibration mean?
It means that 100 images from the `train.txt` file specified in `obj.data` will be loaded, and an optimal `input_calibration` will be calculated for these 100 images; then you will see the average `input_calibration` parameters for each [convolutional] layer.
Hi, @AlexeyAB
OK, I will try to recalculate input_calibration.
When I was testing yolov2-tiny with -quantized, it could not find any objects in the photo. Has the input_calibration value in yolov2-tiny.cfg been calibrated? Or are there other possible causes?
Thanks
@trustin77 Only the cfg-files from this directory are calibrated: https://github.com/AlexeyAB/yolo2_light/tree/master/bin
I.e. if the `input_calibration=` parameter is present in a cfg-file, then that file is calibrated.
Hi, @AlexeyAB
I reduced train.txt and 2007_test.txt to 700 cases each and ran `./darknet detector map data/obj.data yolov3-tiny.cfg yolov3-tiny.weights` to check the mAP value. It ran over all the cases but then got a segmentation fault while displaying the per-class percentages.
Could you check this out for me? Thank you.