mAP
About mAP
I have some models to test with mAP. The first model (call it A) has a mAP of 99.9%; the second model (call it B) also has a mAP of 99.9%. Can we say both A and B are good? However, in the results A's FP count is 2 while B's FP count is 109, so model B clearly seems worse. The input bounding-box txt values are generated by OpenCV, and the confidence score threshold was set to a very small value, 0.005. I am confused about how to evaluate the models, and when using a model in a real application, how should I set the confidence threshold?
mAP is a ranking metric, so it doesn't care about the actual confidence values, only the ranking they induce over the detections.
Having more false positives doesn't necessarily mean you will get a lower mAP score; have a look here. But of course, if one method is giving more false positives with a similar mAP, you should use the one with fewer FPs.
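To make this concrete, here is a minimal sketch with hypothetical toy data (not the repo's actual evaluation code): both models rank all their true positives above their false positives, so the extra low-confidence FPs sit at the bottom of the ranking and leave AP untouched.

```python
def average_precision(detections, num_gt):
    """AP for detections already sorted by descending confidence.

    `detections` is a list of booleans (True = true positive).
    This is the simple all-point AP (mean of the precision at each
    TP rank); monotonic interpolation is omitted, which is exact for
    the toy data below because every TP outranks every FP.
    """
    tp = 0
    precisions_at_tp = []
    for rank, is_tp in enumerate(detections, start=1):
        if is_tp:
            tp += 1
            precisions_at_tp.append(tp / rank)
    return sum(precisions_at_tp) / num_gt

# Hypothetical data: both models find all 10 ground-truth boxes with
# high confidence; only the number of trailing false positives differs.
model_a = [True] * 10 + [False] * 2    # 2 false positives
model_b = [True] * 10 + [False] * 109  # 109 false positives

print(average_precision(model_a, num_gt=10))  # 1.0
print(average_precision(model_b, num_gt=10))  # 1.0
```

Since mAP only sees the ranking, evaluating with a tiny threshold like 0.005 is fine; for deployment you would typically pick a much higher operating threshold from the precision/recall curve, where the extra FPs of a model like B do show up.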