icevision
Compare performance of various mAP metrics
🚀 Feature
Following the discussion on ice-dev, it was suggested that we compare different mAP metrics in terms of their performance. So far, here is what we have available:
- "default" COCOMetric
- Frederik's mAP implementation
- rafaelpadilla's project https://github.com/rafaelpadilla/review_object_detection_metrics
- potentially the mAP implementation by the pytorch-lightning folks https://github.com/PyTorchLightning/pytorch-lightning/pull/4564
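To compare these fairly, a small timing harness could wrap each candidate behind a common callable. A minimal sketch — the `benchmark` helper and its signature are hypothetical, not part of any of the libraries above; each library would need its own thin adapter:

```python
import time

def benchmark(name, fn, *args, repeats=5):
    """Run `fn(*args)` several times and report the best wall-clock time.

    `fn` would be an adapter around one of the candidate mAP
    implementations (hypothetical; each library has its own API).
    """
    best = float("inf")
    result = None
    for _ in range(repeats):
        start = time.perf_counter()
        result = fn(*args)
        best = min(best, time.perf_counter() - start)
    print(f"{name:>20}: {best * 1e3:8.2f} ms (best of {repeats})")
    return result, best
```

Taking the best of several runs (rather than the mean) reduces noise from warm-up and background load, which matters when JIT-compiled or cached code paths are involved.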
Quick observation: @fstroth's and rafaelpadilla's projects both use NumPy, whereas the lightning implementation uses PyTorch. Curious how much of a speedup that would offer.
NumPy would be better for us because PyTorch should eventually become an optional dependency (we want to start offering TensorFlow support). But I'm also curious to see the difference in speed.
Hmm, is pytorch faster than numpy when running on CPU?
> Hmm, is pytorch faster than numpy when running on CPU?
It might be in some cases, but how much faster?
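One way to answer this empirically is a micro-benchmark on identical data. A rough sketch that times a NumPy reduction and, if torch happens to be installed, the equivalent PyTorch call — the array size and the dot-product operation are arbitrary choices for illustration, not representative of a full mAP computation:

```python
import timeit
import numpy as np

def bench(label, fn, number=100):
    """Average per-call wall-clock time of `fn` over `number` runs."""
    t = timeit.timeit(fn, number=number) / number
    print(f"{label}: {t * 1e6:.1f} us per call")
    return t

n = 1_000_000
a_np = np.random.rand(n).astype(np.float32)
b_np = np.random.rand(n).astype(np.float32)
bench("numpy dot", lambda: a_np @ b_np)

try:
    import torch
    # share memory with the numpy arrays so both time the same data
    a_t = torch.from_numpy(a_np)
    b_t = torch.from_numpy(b_np)
    bench("torch dot", lambda: torch.dot(a_t, b_t))
except ImportError:
    print("torch not installed; skipping")
```

The answer likely depends on the op mix: both delegate large vectorized ops to similar BLAS kernels on CPU, so the gap tends to show up in per-call overhead and in threading defaults rather than in the math itself.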
> Numpy would be better for us because eventually pytorch should become an optional dependency (we want to start offering tensorflow support)
Makes sense. Ambitious!
Not to crowd this space, but here's another NumPy implementation, from the author of Albumentations: https://github.com/ternaus/iglovikov_helper_functions/blob/master/iglovikov_helper_functions/metrics/map.py
If we want to go for maximum performance, we should look into writing a version using numba.
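For reference, the kind of hot loop numba targets well is pairwise IoU. A sketch that JIT-compiles the loop when numba is available and falls back to plain Python otherwise — the function name and the `(x1, y1, x2, y2)` box format are assumptions for illustration, not icevision's API:

```python
import numpy as np

try:
    from numba import njit  # optional: JIT-compile the hot loop
except ImportError:
    def njit(func):  # fallback: run as plain Python if numba is absent
        return func

@njit
def pairwise_iou(boxes_a, boxes_b):
    """IoU matrix of shape (A, B) for boxes given as (x1, y1, x2, y2)."""
    out = np.zeros((boxes_a.shape[0], boxes_b.shape[0]))
    for i in range(boxes_a.shape[0]):
        ax1, ay1 = boxes_a[i, 0], boxes_a[i, 1]
        ax2, ay2 = boxes_a[i, 2], boxes_a[i, 3]
        area_a = (ax2 - ax1) * (ay2 - ay1)
        for j in range(boxes_b.shape[0]):
            bx1, by1 = boxes_b[j, 0], boxes_b[j, 1]
            bx2, by2 = boxes_b[j, 2], boxes_b[j, 3]
            iw = min(ax2, bx2) - max(ax1, bx1)
            ih = min(ay2, by2) - max(ay1, by1)
            if iw <= 0 or ih <= 0:
                continue  # no overlap
            inter = iw * ih
            area_b = (bx2 - bx1) * (by2 - by1)
            out[i, j] = inter / (area_a + area_b - inter)
    return out
```

Explicit scalar loops like this are exactly what `@njit` compiles well, whereas the pure-NumPy alternative would need broadcasting over an (A, B, 4) intermediate.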
Best way would be to benchmark first and see where we stand, following Prof. Knuth's advice :D
Not needed anymore.
@FraPochetti I'd keep this one in the backlog.
This is exactly where I got stuck while working on a custom metric implementation: mAP results computed on the same data all mutually differ between direct pycocotools, icevision + COCOMetric, and the rafaelpadilla library.
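One well-known source of such discrepancies is the interpolation scheme used when integrating the precision-recall curve (e.g. all-point vs 11-point interpolation). An illustrative sketch showing the two schemes giving different AP for the same curve — the toy recall/precision values below are made up, not taken from any of the libraries above:

```python
import numpy as np

def ap_all_points(recall, precision):
    """All-point interpolation: area under the precision envelope."""
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    # make precision monotonically non-increasing (the envelope)
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

def ap_11_point(recall, precision):
    """Pascal VOC 2007-style 11-point interpolation."""
    ap = 0.0
    for t in [i / 10 for i in range(11)]:
        mask = recall >= t
        ap += (precision[mask].max() if mask.any() else 0.0) / 11.0
    return float(ap)

recall    = np.array([0.1, 0.2, 0.4, 0.6, 0.8])
precision = np.array([1.0, 0.9, 0.7, 0.6, 0.5])
print(ap_all_points(recall, precision), ap_11_point(recall, precision))
```

On this toy curve the all-point AP is 0.55 while the 11-point AP is about 0.59, so identical detections can legitimately score differently across implementations; matching and score-tie-breaking rules add further divergence on top of that.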