MobileNet-YOLO
mAP is low
Hi Eric,
Thanks for sharing this great repo with us. I am training MobileNet-YOLOv3-Lite on my own dataset, in which all images are 608x342. I set iter_size to 9 and batch_size to 1. The loss stays between 6.0 and 9.0 the whole time, but when I test every 1000 iterations, the mAP is only 34%.
Could you give me some advice? I'll appreciate it a lot.
Can you show me the training log, or send it to me?
Thanks a lot. I've sent the training log to your Gmail. I changed the resize_param (height, width) in mobilnet_yolov3_lite_test.prototxt to 608x608, and now the mAP is about 55%. Since the actual input images are 608x342, which differs from the resize_param (608x608), do you think that mismatch would decrease the mAP?
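The mismatch matters because a warp-style resize stretches the image non-uniformly, which changes the box shapes the anchors have to match. A quick sketch of the scale factors (assuming the test prototxt warps the image directly to the target size):

```python
# Effect of warping a 608x342 image to a 608x608 network input.
# Ground-truth boxes are stretched vertically by height_out / height_in.
img_w, img_h = 608, 342          # actual dataset image size
net_w, net_h = 608, 608          # resize_param in the test prototxt

sx = net_w / img_w               # horizontal scale factor: 1.0
sy = net_h / img_h               # vertical scale factor: ~1.78

# A 100x100 gt box becomes roughly 100 x 178 in network coordinates,
# so anchors clustered on unwarped boxes no longer fit well.
box_w, box_h = 100 * sx, 100 * sy
print(sx, round(sy, 3), round(box_w), round(box_h))
```

So anchors should be generated from boxes measured in the same (warped) coordinate space the network actually sees.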
The gt box counts are unbalanced across the scale layers, as shown below. You may need to generate new anchors; see the wiki page "Generate anchors to increase performance".
I0319 20:21:14.494683 8631 sgd_solver.cpp:121] Iteration 0, lr = 0.0001
I0319 20:21:14.515154 8631 sgd_solver.cpp:144] layer blob norm:0.001873 0.000930 0.000092 0.000267 0.000099 0.000174 0.000049 0.000284 0.000085 0.000185 0.000045 0.000375 0.000096 0.000345 0.000072 0.000295 0.000071 0.000305 0.000067 0.000264 0.000057 0.000223 0.000035 0.000156 0.000295 0.000246 0.000037 0.000193 0.000011 0.000090 0.000001 0.000289 0.000000 0.000094 0.000004 0.000221 0.000089 0.000002 0.000000 0.000288
I0319 20:21:14.544687 8631 sgd_solver.cpp:157] weight diff/data:0.001943 0.002107 0.003163 0.000833 0.003325 0.002037 0.005965 0.007235 0.005611 0.000881 0.006843 0.001876 0.006231 0.001728 0.006255 0.001246 0.006124 0.001585 0.005673 0.001140 0.005746 0.001682 0.006119 0.001543 0.010169 0.001737 0.001510 nan nan nan nan nan nan nan nan
I0319 20:21:14.568569 8631 yolov3_layer.cpp:362] avg_noobj: 0.000624082 avg_obj: 0.961 avg_iou: 0.805703 avg_cat: 0.999997 recall: 1 recall75: 1 count: 1
I0319 20:21:14.575724 8631 yolov3_layer.cpp:362] avg_noobj: 0.00175672 avg_obj: 0.712236 avg_iou: 0.485117 avg_cat: 0.882156 recall: 0.522276 recall75: 0.291522 count: 8
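For reference, the wiki tool clusters the gt box sizes with IoU-distance k-means, as in the YOLO papers. A minimal sketch of that procedure, with synthetic boxes standing in for a real label file:

```python
import random

def iou(box, centroid):
    # IoU between two (w, h) pairs, both anchored at the origin
    w = min(box[0], centroid[0])
    h = min(box[1], centroid[1])
    inter = w * h
    union = box[0] * box[1] + centroid[0] * centroid[1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100):
    # k-means with distance = 1 - IoU(box, centroid)
    centroids = random.sample(boxes, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for b in boxes:
            i = max(range(k), key=lambda j: iou(b, centroids[j]))
            clusters[i].append(b)
        new = []
        for i, c in enumerate(clusters):
            if not c:                      # keep an empty cluster's centroid
                new.append(centroids[i])
                continue
            new.append((sum(b[0] for b in c) / len(c),
                        sum(b[1] for b in c) / len(c)))
        if new == centroids:               # converged
            break
        centroids = new
    return sorted(centroids, key=lambda c: c[0] * c[1])

random.seed(0)
# synthetic (w, h) gt boxes in network-input pixels
boxes = [(random.uniform(10, 400), random.uniform(10, 400)) for _ in range(200)]
anchors = kmeans_anchors(boxes, k=6)
# average best IoU over all boxes; higher means the anchors fit better
avg = sum(max(iou(b, a) for a in anchors) for b in boxes) / len(boxes)
print(anchors)
print(round(avg, 3))
```

The box sizes and seed here are placeholders; the real tool reads (w, h) pairs from the dataset's label files.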
@eric612 Thank you so much! I will try generating new anchors and check the performance.
Hi, I used that tool to generate 6 anchors.
They are listed as follows:
91.24,215.68, 166.46,429.97, 294.33,186.19, 333.13,862.41, 766.21,396.41, 973.09,1792.80
The average IOU is 0.675699.
Are they too large? It looks weird. Both width_in_cfg_file and height_in_cfg_file are 608.
It is recommended to use 416; you can try 416 first.
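If you do not want to re-cluster at 416 right away, anchors computed at 608 can be rescaled by 416/608 as a rough first pass (re-clustering with width_in_cfg_file = height_in_cfg_file = 416 is the cleaner option):

```python
# Anchors generated at width_in_cfg_file = height_in_cfg_file = 608,
# rescaled to a 416x416 input: multiply each dimension by 416/608.
anchors_608 = [(91.24, 215.68), (166.46, 429.97), (294.33, 186.19),
               (333.13, 862.41), (766.21, 396.41), (973.09, 1792.80)]

scale = 416 / 608
anchors_416 = [(round(w * scale, 2), round(h * scale, 2))
               for w, h in anchors_608]
print(anchors_416)
# Note: some heights still exceed the input size even after rescaling,
# which may indicate the clustering ran on coordinates larger than the
# network input (e.g. unnormalized labels).
```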
@eric612 Hi, could you please show me how to infer the gt box imbalance across scales from the training log?
I set batch_size=2 and iter_size=16, and the mAP is much worse, about 10 points lower. Is that normal?
My weights file and network structure are the same as yours.
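One thing worth checking (this is only an assumption, since nothing but the solver settings differ): the effective batch size changed, and the learning rate may need rescaling to match. A sketch using the linear scaling heuristic:

```python
# Effective batch size in Caffe is batch_size * iter_size
# (gradients are accumulated over iter_size forward/backward passes).
orig = 1 * 9      # original thread: batch_size=1, iter_size=9  -> 9
new  = 2 * 16     # this run:        batch_size=2, iter_size=16 -> 32

# Linear scaling rule (a common heuristic, not a repo recommendation):
# adjust base_lr in proportion to the effective batch size.
base_lr = 0.0001  # lr seen at iteration 0 in the log above
scaled_lr = base_lr * new / orig
print(new, round(scaled_lr, 6))
```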
@eric612 Thank you very much!