pytorch-retinanet
Question about precision and recall metrics
Hi, I would like to know how to calculate the precision, recall and specificity metrics.
Using csv_eval.py to calculate the metrics, I got the following results on my custom dataset:
mAP: 0.9294879091175865
precision: 0.2911764705882353
recall: 0.9519230769230769
Why are mAP and recall greater than precision, if mAP is computed from precision and recall?
Thank you, Alan Lima
Hi Alan,
the exact same question came to my mind. I think the explanation is the confidence score threshold, which is set to 0.05. The returned precision and recall values correspond to this single threshold. The mAP, on the other hand, is independent of it: it is computed by sweeping over all possible thresholds. If you imagine a PR curve, the 0.05 threshold would sit at the far right side of that plot, so you get a very good recall but a very low precision. You can also plot the PR curve in the csv_eval function; I think it becomes much clearer then. That said, I think the PR curve should include all values until it reaches the right side, and should not stop early.
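To make this concrete, here is a small sketch (the detection list and ground-truth count are made up for illustration, not taken from csv_eval.py) showing how a single low threshold yields one precision/recall pair, while AP integrates precision over the whole ranked list:

```python
import numpy as np

# Hypothetical detections sorted by confidence, highest first.
# Each entry: (score, is_true_positive). Invented data for illustration.
detections = [
    (0.95, True), (0.90, True), (0.80, True), (0.60, False),
    (0.40, True), (0.30, False), (0.20, False), (0.10, False),
]
num_gt = 4  # total number of ground-truth boxes (assumed)

def precision_recall_at(threshold):
    """Precision/recall for the single operating point >= threshold."""
    kept = [tp for score, tp in detections if score >= threshold]
    tp = sum(kept)
    fp = len(kept) - tp
    precision = tp / (tp + fp) if kept else 0.0
    recall = tp / num_gt
    return precision, recall

# At a low threshold like 0.05, almost every detection is kept:
# many false positives drag precision down, but recall is high.
p_low, r_low = precision_recall_at(0.05)

# AP instead sweeps the threshold down the ranked list, accumulating
# TP/FP counts and integrating precision over recall (no interpolation
# here, for simplicity).
tps = np.cumsum([tp for _, tp in detections])
fps = np.cumsum([not tp for _, tp in detections])
precisions = tps / (tps + fps)
recalls = tps / num_gt
ap = np.sum(np.diff(np.concatenate(([0.0], recalls))) * precisions)

print(f"precision@0.05 = {p_low}, recall@0.05 = {r_low}, AP = {ap}")
# AP comes out high (0.95) even though precision at the 0.05
# operating point is only 0.5, mirroring the numbers in the question.
```

The key point: precision and recall printed by the evaluation script describe one point on the PR curve (the 0.05 threshold), while AP summarizes the whole curve, so a high AP with low precision is perfectly consistent.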
A nice animation explaining the confidence threshold and its relation to precision recall can be found here: https://blog.zenggyu.com/en/post/2018-12-16/an-introduction-to-evaluation-metrics-for-object-detection/