pytorch-YOLOv4
Did you check accuracy of YOLOv4 by using your implementation on MSCOCO?
@Tianxiaomo Hi,
- Did you check the accuracy of YOLOv4 using your implementation? Do you get the same 43.5% AP (65.7% AP50) for yolov4.cfg 608x608 on MS COCO test-dev?
- Did you check whether the Darknet implementation and your implementation give the same detection result with the converted model? For example, for this image:
+1
+1
This is what I got with the yolov4.pth, 608x608 input:
These are the results on the COCO validation set (val2017) with 416x416 input. It seems the model does not reach the accuracy of the original Darknet model.
conf_thresh = 0.001, NMS_IOU_thresh = 0.6
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.42601
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.65587
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.45796
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.21062
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.48669
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.60758
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.32180
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.51224
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.55351
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.32155
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.62874
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.74219
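The post-processing settings quoted above (conf_thresh = 0.001, NMS IoU threshold 0.6) can be sketched as a minimal greedy NMS pass. This is a dependency-free NumPy illustration of the standard algorithm under an assumed `[x1, y1, x2, y2]` box format, not the repo's actual post-processing code:

```python
import numpy as np

def nms(boxes, scores, conf_thresh=0.001, iou_thresh=0.6):
    """Greedy NMS over [x1, y1, x2, y2] boxes; returns indices of kept boxes.
    conf_thresh is kept very low for mAP evaluation so low-confidence
    detections still contribute to the precision/recall curve."""
    idx = np.nonzero(scores >= conf_thresh)[0]
    order = idx[np.argsort(-scores[idx])]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # intersection of box i with all remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]  # suppress boxes overlapping box i
    return keep
```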
These are the results from https://github.com/AlexeyAB/darknet/issues/5354:
###############################################################################
# YOLOV4 416x416 CODALAB res COCO2017 VAL #
###############################################################################
overall performance
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.471
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.710
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.510
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.278
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.525
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.636
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.357
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.561
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.587
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.382
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.642
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.772
@ersheng-ai
- Did you use converted or trained weights in PyTorch?
- Do you keep the aspect ratio of the image during resizing? YOLOv4 does not keep the aspect ratio by default.
Pretrained Darknet weights were used to generate the .pth file. I pad the images with zeros to keep the aspect ratio.
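The two resizing strategies under discussion can be sketched as follows. This is a minimal NumPy illustration (the function names are mine, and nearest-neighbor sampling stands in for cv2 interpolation), not the repo's actual preprocessing:

```python
import numpy as np

def letterbox(img, size=416):
    """Resize keeping the aspect ratio, padding the remainder with zeros."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # nearest-neighbor sampling grid (real code would use cv2 interpolation)
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    out = np.zeros((size, size) + img.shape[2:], dtype=img.dtype)
    top, left = (size - nh) // 2, (size - nw) // 2
    out[top:top + nh, left:left + nw] = resized  # center the image, zero borders
    return out

def squash(img, size=416):
    """Plain resize to size x size, ignoring the aspect ratio (Darknet default)."""
    h, w = img.shape[:2]
    ys = (np.arange(size) * h / size).astype(int).clip(0, h - 1)
    xs = (np.arange(size) * w / size).astype(int).clip(0, w - 1)
    return img[ys][:, xs]
```

The accuracy difference discussed below comes from this choice: if the network was trained on squashed images, evaluating on letterboxed images shifts the input distribution.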
@ersheng-ai Try resizing without keeping the aspect ratio / padding zeros. Will the accuracy be better?
I will try it
Zero padding is removed, so images of any aspect ratio are squashed or stretched to 416x416. Now the results look much closer to https://github.com/AlexeyAB/darknet/issues/5354:
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.46605
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.70384
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.50477
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.26692
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.52448
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.62891
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.34238
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.54948
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.59087
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.38072
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.65378
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.76268
@ersheng-ai
- github.com/AlexeyAB/darknet - 47.1% AP - 71.0% AP50
- Tianxiaomo/pytorch-YOLOv4 - 46.6% AP (-0.5) - 70.3% AP50 (-0.7)
So there is no big drop in accuracy. Great!
Do you use padding = SAME or VALID for conv and max-pooling (in the SPP block) in YOLOv4? Darknet always uses SAME padding for conv/maxpool.
I think the PyTorch model strictly follows the Darknet routines. The slight drop in accuracy may be due to post-processing such as NMS. We are looking into it.
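For the SPP block specifically, SAME padding matters because its stride-1 max-pools (kernels 5, 9, 13) must preserve the spatial size so the pooled maps can be concatenated with the input. A quick sketch of the output-size arithmetic:

```python
def pool_out_size(n, k, s=1, p=0):
    """Standard conv/pooling output-size formula: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

# Darknet's SPP block at 416x416 input pools a 13x13 feature map with
# stride-1 kernels 5, 9, 13. SAME padding (p = k // 2 for odd k) keeps
# the map at 13x13; VALID padding would shrink it and break concatenation.
for k in (5, 9, 13):
    same = pool_out_size(13, k, s=1, p=k // 2)
    valid = pool_out_size(13, k, s=1, p=0)
    print(f"k={k}: SAME -> {same}x{same}, VALID -> {valid}x{valid}")
```

In PyTorch this corresponds to `nn.MaxPool2d(k, stride=1, padding=k // 2)` for each of the three kernels.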
Hi @ersheng-ai, thank you for your great work! I have a question: this project seems to use only the YOLOv4 network and Mosaic augmentation, but not Eliminate-grid-sensitivity, the IoU threshold, CIoU loss, and so on. Yet this project gets 46.6% AP while Darknet YOLOv4 gets 47.1% AP on COCO val2017. So what matters most in getting AP as high as Darknet's?
How was this performance obtained: training from the CSPDarknet ImageNet-pretrained model, or from the YOLOv4 COCO-pretrained model?
Why should the PyTorch version use conf_thresh = 0.001 to compute mAP?
Hi @ersheng-ai, thank you for your great work! One question: what is your hardware, and how long does it take to train a complete COCO run?
Is there any code to reproduce this mAP?
@ersheng-ai How do I run validation on COCO?
How do you evaluate? Can you provide the command?
What files and commands did you use to run your evaluation? Any other changes from the original GitHub code would be appreciated.
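One common way to reproduce these numbers (not necessarily the authors' exact pipeline) is to dump detections in the COCO results format and score them with pycocotools. A sketch of the conversion step, with an assumed detection format (`[x1, y1, x2, y2]` boxes in pixels and contiguous class indices):

```python
def to_coco_results(image_id, boxes, scores, class_ids, coco_cat_ids):
    """Convert one image's detections into COCO results entries.
    COCO expects boxes as [x, y, width, height], and category_id must use
    COCO's sparse category ids, so a class-index mapping is required."""
    results = []
    for (x1, y1, x2, y2), score, cls in zip(boxes, scores, class_ids):
        results.append({
            "image_id": image_id,
            "category_id": coco_cat_ids[cls],   # map 0..79 -> COCO's sparse ids
            "bbox": [x1, y1, x2 - x1, y2 - y1],  # xyxy -> xywh
            "score": float(score),
        })
    return results

# Scoring (requires pycocotools and instances_val2017.json):
# from pycocotools.coco import COCO
# from pycocotools.cocoeval import COCOeval
# coco_gt = COCO("instances_val2017.json")
# coco_dt = coco_gt.loadRes("results.json")
# ev = COCOeval(coco_gt, coco_dt, "bbox")
# ev.evaluate(); ev.accumulate(); ev.summarize()
```

`COCOeval.summarize()` prints exactly the AP/AR table format quoted throughout this thread.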