yolov3-tf2
How to obtain a good accuracy + how to get mAP?
Hi, I am training on a custom dataset with yolo tiny and 1 class. The model only provides an accuracy measure. Is there a way to obtain mAP? If not, can anyone explain how I can create my own script to compute mAP? I noticed that mAP is the metric used in a lot of the deep learning object detection papers I have been reading, which is why I am interested in it.
Secondly, I trained on my custom dataset, which has 3000 images. I used pre-trained weights and only got an accuracy of 30%. Is this normal? Also, the official darknet implementation says that loss should be less than 1, but my validation loss was 12 and my training loss was 5 after 30 epochs. The model pretty much plateaus after 30 epochs, so I don't see much further improvement. Is there something wrong with these numbers?
Also, I made some improvements to my dataset and got a 10 percent increase in accuracy when I tried again (40% accuracy). So cleaning up and removing bad data might help more, but I'm not too sure. I need some confirmation on whether the accuracy and loss values I got are normal for tiny.
- I loaded the pre-trained darknet weights and trained only the top layers, as described in issue #48 (that worked for me).
Thank you.
mAP calculation is pretty hard to implement. I have looked around on the internet hoping to find a clean algorithm I could borrow, but unfortunately I haven't found a good implementation and also did not have time to write one from scratch. I definitely want to have an mAP score myself too, but it's just too hard to code.
Hi @zzh8829, I will try to implement an mAP score. I only have 1 class for training, which might make it slightly easier. I am also thinking of implementing mAP post-training rather than during training (see the sketch after this comment).
I also found this repo, which might be helpful. https://github.com/Cartucho/mAP#explanation
Also, do you know how accurate the yolov3 tiny model can get? I have around 4000 images for training but only get around 30 percent accuracy with tiny (around 50 percent with plain v3). I think it's because I have bad data, so I relabelled, got more images, and am going to train again. But I just want to know the general accuracy for yolo tiny so I can compare.
Thank you, Robin
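For anyone attempting the post-training route, here is a minimal single-class AP sketch (the function names, the `(image_id, score, box)` layout, and the `[x1, y1, x2, y2]` box format are illustrative assumptions, not part of this repo): detections are sorted by confidence, matched greedily to ground-truth boxes at a fixed IoU threshold, and the area under the resulting precision-recall curve gives AP. Averaging AP over classes would give mAP.

```python
import numpy as np

def box_iou(a, b):
    """IoU of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def average_precision(detections, ground_truths, iou_thresh=0.5):
    """detections: list of (image_id, score, box); ground_truths: dict image_id -> list of boxes."""
    detections = sorted(detections, key=lambda d: d[1], reverse=True)  # by confidence, descending
    n_gt = sum(len(b) for b in ground_truths.values())
    matched = {img: [False] * len(boxes) for img, boxes in ground_truths.items()}
    tp, fp = np.zeros(len(detections)), np.zeros(len(detections))
    for i, (img, _, box) in enumerate(detections):
        gt_boxes = ground_truths.get(img, [])
        ious = [box_iou(box, g) for g in gt_boxes]
        best = int(np.argmax(ious)) if ious else -1
        if best >= 0 and ious[best] >= iou_thresh and not matched[img][best]:
            tp[i] = 1
            matched[img][best] = True  # each ground-truth box can only be matched once
        else:
            fp[i] = 1
    recall = np.cumsum(tp) / max(n_gt, 1)
    precision = np.cumsum(tp) / np.maximum(np.cumsum(tp) + np.cumsum(fp), 1e-9)
    # Area under the PR curve with all-point interpolation (Pascal VOC style)
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    for k in range(mpre.size - 1, 0, -1):
        mpre[k - 1] = max(mpre[k - 1], mpre[k])
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))

# Example: one image, one ground-truth box, two detections (one correct, one spurious)
gt = {'img1': [[10, 10, 50, 50]]}
dets = [('img1', 0.9, [12, 11, 48, 52]), ('img1', 0.4, [100, 100, 120, 120])]
print(average_precision(dets, gt))  # 1.0: the correct box is ranked first
```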
Please check out the new training tutorial for getting good accuracy.
It’s probably better to standardize on one implementation in order to avoid potential accuracy loss as a result of implementation details and to make the results more reproducible. We could probably just rely on the PyCOCO API to report the mAP as shown here:
https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocoEvalDemo.ipynb
This appears to be the same flow that Tensorflow/MLPerf relies on as well.
https://github.com/tensorflow/models/blob/master/research/object_detection/metrics/coco_tools.py
https://github.com/mlperf/inference/blob/master/v0.5/classification_and_detection/tools/accuracy-coco.py
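As a rough sketch of that flow (assuming the ground truth is already in COCO JSON format and the detections have been exported to a results JSON; the file paths below are placeholders), the pycocotools evaluation could look like:

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Placeholder paths: COCO-format ground truth and exported detections
coco_gt = COCO('annotations/instances_val.json')
# Detections as a list of {"image_id", "category_id", "bbox", "score"} entries
coco_dt = coco_gt.loadRes('detections_results.json')

coco_eval = COCOeval(coco_gt, coco_dt, 'bbox')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP/AR at the standard COCO IoU thresholds
```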
@zzh8829 any update on this issue? Thanks
A good general option for measuring mean average precision is João Cartucho's tool (it uses txt files).
✔️ https://github.com/Cartucho/mAP
Also, the pip package from Simon Klimaschka based on Cartucho's work (it uses dictionaries, which may be useful for small datasets; a usage sketch follows below) ✔️ https://pypi.org/project/mapcalc/ ✔️ https://github.com/LeMuecke/mapcalc
I'll try this way to solve the problem in my project.
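In case it helps, a minimal mapcalc sketch might look like the following (the dictionary keys, box format, and function signatures are my reading of the package's README, so double-check against the current docs before relying on them):

```python
from mapcalc import calculate_map, calculate_map_range

# Ground truth and predictions as dictionaries (format assumed from the mapcalc README)
ground_truth = {
    'boxes': [[60, 80, 66, 92], [59, 94, 68, 97]],  # [x1, y1, x2, y2]
    'labels': [1, 1],
}
result = {
    'boxes': [[61, 81, 65, 91], [120, 30, 130, 40]],
    'labels': [1, 1],
    'scores': [0.9, 0.3],
}

print(calculate_map(ground_truth, result, 0.5))                     # mAP at IoU 0.5
print(calculate_map_range(ground_truth, result, 0.5, 0.95, 0.05))   # averaged over IoU thresholds
```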
For the COCO dataset, perhaps you can use the pycocotools API: https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocoEvalDemo.ipynb