Eric Sheng

13 comments by Eric Sheng

![qr](https://user-images.githubusercontent.com/62684483/85361856-98462880-b54f-11ea-8df4-292fc0416c5b.jpg)

![Screenshot_20200713_124142](https://user-images.githubusercontent.com/62684483/87271693-9516da80-c506-11ea-9d23-03528edf4306.jpg)

![mmqrcode1595413748507](https://user-images.githubusercontent.com/62684483/88166664-34944580-cc4a-11ea-80d3-da890e7cb2fc.png)

![Screenshot_20200729_145045](https://user-images.githubusercontent.com/62684483/88767786-afaaae00-d1ac-11ea-80d2-f76efe673f7b.jpg)

![Screenshot_20200810_105345](https://user-images.githubusercontent.com/62684483/89748962-5faae000-daf8-11ea-8bc9-a2c1c567150c.jpg)

![Screenshot_20200819_100224](https://user-images.githubusercontent.com/62684483/90584618-84443d80-e205-11ea-9e62-0f07926b9c9d.jpg)

These are the results on the COCO validation set (val2017) with 416*416 input. It seems not to reach the accuracy of the original Darknet model: ``` conf_thresh = 0.001 NMS_IOU_thresh =...
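The thresholds named in the truncated snippet above (a very low `conf_thresh` of 0.001 plus an NMS IoU threshold) are the usual settings for mAP evaluation, where almost all candidate boxes are kept and duplicates are pruned by non-maximum suppression. A minimal NumPy sketch of greedy NMS using those two parameters; this is an illustration, not the repository's actual implementation, and the IoU threshold of 0.6 is an assumed value:

```python
import numpy as np

def nms(boxes, scores, conf_thresh=0.001, iou_thresh=0.6):
    """Greedy NMS. boxes: (N, 4) as x1, y1, x2, y2; scores: (N,)."""
    mask = scores >= conf_thresh          # evaluation keeps nearly everything
    boxes, scores = boxes[mask], scores[mask]
    order = scores.argsort()[::-1]        # highest score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # intersection of the top box with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]   # suppress heavy overlaps
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # second box overlaps the first too much and is dropped
```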

> @ersheng-ai
>
> * Did you use **converted** or **trained** weights on PyTorch?
> * Did you keep the aspect ratio of the image during resizing?
>
> YOLOv4 doesn't keep...

> @ersheng-ai Try to use resizing without keeping-aspect-ratio/padding-zeros, will accuracy be better?

I will try it.

Zero padding is removed, so images of any aspect ratio are squashed or stretched to 416*416. Now the results look much closer to https://github.com/AlexeyAB/darknet/issues/5354 ``` Average Precision (AP)...
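The change described above, replacing letterbox preprocessing (resize keeping aspect ratio, pad the rest with zeros) with a direct stretch to the network size, can be sketched with NumPy. The names `resize_nn`, `stretch`, and `letterbox` are illustrative, and a nearest-neighbour resize stands in for `cv2.resize`; this is not the repository's actual preprocessing code:

```python
import numpy as np

def resize_nn(img, out_h, out_w):
    """Nearest-neighbour resize (stand-in for cv2.resize)."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def stretch(img, size=416):
    """Squash/stretch directly to size x size, ignoring aspect ratio."""
    return resize_nn(img, size, size)

def letterbox(img, size=416):
    """Resize keeping aspect ratio, then zero-pad to size x size."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    nh, nw = int(h * scale), int(w * scale)
    resized = resize_nn(img, nh, nw)
    out = np.zeros((size, size, img.shape[2]), dtype=img.dtype)
    top, left = (size - nh) // 2, (size - nw) // 2
    out[top:top + nh, left:left + nw] = resized
    return out

# A 480x640 white frame: stretch fills the whole 416x416 square,
# letterbox leaves black bars above and below the resized image.
img = np.full((480, 640, 3), 255, dtype=np.uint8)
print(stretch(img).shape, letterbox(img).shape)
```

Which variant is used must match between training and evaluation; the accuracy gap discussed in this thread came from evaluating a stretch-trained model with letterboxed inputs.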