
lower object percentage using -quantized

Open rimu123 opened this issue 6 years ago • 8 comments

Hi, thanks for the great work! I developed a new model similar to mobilenet-yolov3. I get a lower object-confidence percentage when using -quantized; it is about 10 percentage points lower than without quantization on the GPU. I am very sure I added an input_calibration generated from my own dataset to the cfg. I have heard that small models have very low precision after quantization. I don't know if that is correct; can you give me some advice? Thank you!

rimu123 avatar Dec 21 '18 10:12 rimu123

@rimu123 Hi,

I get a lower object-confidence percentage when using -quantized

I recommend using a lower threshold for INT8, -thresh 0.15, than for Float32, -thresh 0.25. That should solve your issue.


I have heard that small models have very low precision after quantization. I don't know if that is correct; can you give me some advice?

Just check your mAP with -quantized and without it:

  • ./darknet detector map data/obj.data yolo_obj.cfg data/yolo_obj_10000.weights -thresh 0.25

  • ./darknet detector map data/obj.data yolo_obj.cfg data/yolo_obj_10000.weights -thresh 0.15 -quantized

Usually the class probability is much lower, e.g. 15% instead of 25%, i.e. about -40% relative. But there is only a small drop in mAP accuracy, something like 75% instead of 78%, i.e. only -3 percentage points.
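
For intuition, the lower probabilities come from rounding error in the INT8 round trip. A minimal generic sketch of that error, not yolo2_light's actual kernel; the [-4, 4] range stands in for a calibrated per-layer activation range:

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    // Minimal illustration of symmetric INT8 quantization round-trip error.
    int main(void) {
        const float scale = 4.0f / 127.0f;     // map assumed range [-4, 4] onto int8
        float x = 0.731f;                      // an example activation value
        int8_t q = (int8_t)roundf(x / scale);  // quantize
        float x_hat = q * scale;               // dequantize
        printf("x=%f q=%d x_hat=%f err=%f\n", x, q, x_hat, x - x_hat);
        return 0;
    }

Every layer's output inherits such rounding errors, so the final class confidences land slightly lower, which the lower -thresh compensates for.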

More: https://github.com/AlexeyAB/yolo2_light/issues/24

AlexeyAB avatar Dec 21 '18 10:12 AlexeyAB

@AlexeyAB thank you for your reply. Yes, I followed your suggestion:

  • ./darknet detector map data/obj.data yolo_obj.cfg data/yolo_obj_10000.weights -thresh 0.25 gives mAP = 50.3%

  • ./darknet detector map data/obj.data yolo_obj.cfg data/yolo_obj_10000.weights -thresh 0.15 -quantized gives mAP = 40.6%

There is a big drop in mAP accuracy, i.e. -10%. I think that is not normal. I disabled quantization for the first few layers, but it did not help. Now I don't know what to do.

rimu123 avatar Dec 21 '18 12:12 rimu123

I disabled quantization for the first few layers, but it did not help. Now I don't know what to do.

  • How did you do it?

  • What model do you use? Is it yolov3.cfg?

  • How many iterations did you train?

  • Try to use the default input_calibration= parameters

AlexeyAB avatar Dec 21 '18 12:12 AlexeyAB

@AlexeyAB thank you for your reply.

How did you do it?

I modified this line of code in additionally.c (#3209):

if (params.index == 0 || activation == LINEAR || (params.index > 1 && stride > 1) || size == 1)
    quantized = 0; // disable quantization for the 1st and last layers

to:

if (params.index < 4 || activation == LINEAR || (params.index > 4 && stride > 1) || size == 1)
    quantized = 0; // disable quantization for the first few and last layers

I think that means quantization is disabled for the first four layers (indices 0 to 3) instead of just the first.
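
For reference, the same guard can be factored into a small predicate with the cut-off as a named constant; this is only a sketch, and FIRST_FP32_LAYERS / keep_float32 are hypothetical names that do not exist in additionally.c:

    // Hypothetical constant; additionally.c hard-codes the indices instead.
    #define FIRST_FP32_LAYERS 4

    // Keep the first FIRST_FP32_LAYERS layers, linear-activation layers,
    // later strided layers, and 1x1 convolutions in Float32.
    static int keep_float32(int layer_index, int activation_is_linear,
                            int stride, int size) {
        return layer_index < FIRST_FP32_LAYERS || activation_is_linear ||
               (layer_index > FIRST_FP32_LAYERS && stride > 1) || size == 1;
    }
    // Usage at the original site: if (keep_float32(...)) quantized = 0;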

What model do you use? Is it yolov3.cfg?

I developed a new model whose backbone is PeleeNet, a small classification network. The model is 27M and has 192 layers.

How many iterations did you train?

I trained the new model for 300k iterations, so I think it has converged.

Try to use the default input_calibration= parameters

I used the default input_calibration= parameters of yolov3.cfg for my model and got mAP = 45.7%, about a 5% drop. It is surprising; I do not know why. I am very sure I used the validation set to generate my own input_calibration, i.e.:

input_calibration = 15.497, 12.2827, 15.2862, 15.7211, 15.497, 15.8132, 15.8132, 15.497, 15.8132, 15.497, 15.8132, 15.8132, 15.497, 15.8132, 15.497, 15.8132, 15.497, 15.8132, 15.497, 15.8132, 15.497, 15.8132, 15.497, 15.8132, 15.497, 15.8132, 15.497, 15.8132, 15.8132, 15.497, 15.8132, 15.497, 15.8132, 15.497, 15.8132, 15.497, 15.8132, 15.497, 15.8132, 15.497, 15.8132, 15.497, 15.8132, 15.497, 15.8132, 15.8132, 15.497, 15.8132, 15.497, 15.8132, 15.497, 15.8132, 15.497, 15.8132, 15.8132, 15.8132, 15.8132, 15.8132, 13.7261, 15.497, 15.497, 15.8132, 15.8132, 15.8132, 15.8132, 15.8132, 15.8132, 15.497, 15.497, 15.8132, 15.8132, 15.8132, 15.8132, 15.8132, 15.8132, 16

Why does my own input_calibration give a worse result than the default one?
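
For context, per-layer input calibration values are typically derived from activation statistics collected over a calibration set. Below is a generic "max-abs" sketch of that idea, not yolo2_light's actual calibration code; observe and input_multiplier are illustrative names:

    #include <float.h>
    #include <math.h>

    #define N_LAYERS 192  // depth of the model discussed above

    // Largest absolute input activation seen per layer while running the
    // calibration images through the Float32 network.
    static float max_abs[N_LAYERS];

    // Call once per layer per calibration image with that layer's inputs.
    void observe(int layer, const float *x, int n) {
        for (int i = 0; i < n; ++i) {
            float a = fabsf(x[i]);
            if (a > max_abs[layer]) max_abs[layer] = a;
        }
    }

    // After all images: scale factor mapping the layer's inputs onto int8.
    float input_multiplier(int layer) {
        return 127.0f / fmaxf(max_abs[layer], FLT_EPSILON);
    }

If a few outlier images dominate the recorded ranges, the resulting multipliers can waste INT8 resolution, which may show up as exactly this kind of mAP drop.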

rimu123 avatar Dec 22 '18 04:12 rimu123

@rimu123

if (params.index < 4 || activation == LINEAR || (params.index > 4 && stride > 1) || size == 1)
    quantized = 0;

I think that means quantization is disabled for the first four layers.

Yes.


Why does my own input_calibration give a worse result than the default one?

Maybe there is something wrong in my calibration code.


I used the default input_calibration= parameters of yolov3.cfg for my model and got mAP = 45.7%, about a 5% drop. It is surprising; I do not know why. I am very sure I used the validation set to generate my own input_calibration, i.e. input_calibration = 15.497, 12.2827, 15.2862, 15.7211, ....

Also, you can try to tune the input calibration manually. Just use 48 instead of 40, or 32 instead of 40, and check the mAP: https://github.com/AlexeyAB/yolo2_light/blob/117f196518597a902bc4c0552724f4fb99f09e3d/bin/yolov3.cfg#L25

AlexeyAB avatar Dec 22 '18 09:12 AlexeyAB

@AlexeyAB thank you for your reply. I tried using 48 and 32 instead of 40, and some combinations of them. In addition, I modified the first few input calibration values based on the ones I generated myself. The best result was mAP = 45.8%; maybe that is the best it can reach, and of course such modifications are blind. By the way, there are a lot of 40s in input_calibration. Does that mean the dynamic range of each layer's output is fixed? And does it reflect some characteristic of CNNs?

rimu123 avatar Dec 22 '18 12:12 rimu123

@rimu123 Hi, when you applied quantization, did your weights file size decrease?

abhigoku10 avatar Jul 15 '19 09:07 abhigoku10

@AlexeyAB Hello,

  1. Does -quantized work only in the yolo2_light repo?

  2. What's more, I found that whether I run with -quantized or without it, the prediction time always equals 0.000000 seconds, as follows:

     Loading weights from yolov3-tiny_final_572.weights...Done!
     test1.jpg: Predicted in 0.000000 seconds.
     classes= 1: 100% (left_x: 69 top_y: 251 width: 83 height: 98)
     classes= 1: 100% (left_x: 248 top_y: 174 width: 74 height: 81)
     classes= 1: 100% (left_x: 454 top_y: 206 width: 62 height: 88)
     classes= 1: 92% (left_x: 461 top_y: 352 width: 80 height: 97)
     classes= 1: 97% (left_x: 704 top_y: 141 width: 68 height: 76)
     classes= 1: 99% (left_x: 779 top_y: 282 width: 84 height: 90)
     Not compiled with OpenCV, saving to predictions.png instead

     Could you tell me where the bug might be?

  3. When I calculate mAP, it shows at the end: "Set -points flag: -points 101 for MS COCO, -points 11 for PascalVOC 2007 (uncomment difficult in voc.data), -points 0 (AUC) for ImageNet, PascalVOC 2010-2012, your custom dataset". Could you clarify what it means? (See the sketch after this list.)

  4. Finally, in the yolo2_light repo, what is the meaning of difficult=data/difficult_2007_test.txt in the voc.data file? If I don't run voc_label_difficult.py and just set difficult to the same file as valid, is that OK?
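
Regarding point 3, -points selects the mAP interpolation rule. A minimal sketch of the 11-point PascalVOC 2007 rule, generic rather than yolo2_light's exact implementation; ap_11point, prec, and rec are illustrative names:

    // 11-point interpolated AP (-points 11): for each recall level
    // r in {0.0, 0.1, ..., 1.0}, take the maximum precision achieved at
    // any recall >= r, then average the 11 values.
    // prec/rec hold the precision-recall curve sorted by increasing recall.
    float ap_11point(const float *prec, const float *rec, int n) {
        float ap = 0.0f;
        for (int k = 0; k <= 10; ++k) {
            float r = k / 10.0f;
            float p_max = 0.0f;
            for (int i = 0; i < n; ++i)
                if (rec[i] >= r && prec[i] > p_max) p_max = prec[i];
            ap += p_max / 11.0f;
        }
        return ap;
    }

-points 101 applies the same rule over 101 recall levels (MS COCO style), while -points 0 integrates the exact area under the precision-recall curve.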

Thank you in advance!!!

ReekiLee avatar Mar 30 '20 15:03 ReekiLee