
Benchmark: need to have per-image metrics and detailed report

Opened by remtav 4 years ago • 2 comments

Currently, when using inference.py with ground truth files, we average metrics over all predictions made.

It would be important to keep per-image metrics for benchmarking purposes. For example, this could help us identify whether the model performs better on images from a particular ecozone, season, time of day, etc.

This shows the need for a benchmarking platform that could output a complete report on a given model's performance.
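A minimal sketch of what per-image metric collection could look like (the helper names and record layout here are illustrative, not the geo-deep-learning implementation): instead of averaging over all predictions, one record is kept per image, carrying metadata that a report could later group by.

```python
from statistics import mean

def iou(pred, truth):
    """Binary IoU between two equal-length 0/1 mask lists."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

def benchmark(samples):
    """samples: (image_id, pred_mask, truth_mask, metadata) tuples.
    Returns one record per image plus the overall average IoU."""
    records = [
        {"image": img, "iou": iou(pred, truth), **meta}
        for img, pred, truth, meta in samples
    ]
    return records, mean(r["iou"] for r in records)

records, avg = benchmark([
    ("tile_01", [1, 1, 0, 0], [1, 0, 0, 0], {"ecozone": "boreal"}),
    ("tile_02", [0, 1, 1, 1], [0, 1, 1, 0], {"ecozone": "prairie"}),
])
```

With per-image records in hand, the averaged figure is still available, but slicing by `ecozone` (or season, time of day) becomes a simple group-by over the records.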

— remtav, Aug 25 '20

This has been resolved in the new benchmarking implementation that was recently merged!

— valhassan, Aug 28 '20

https://smp.readthedocs.io/en/latest/metrics.html#segmentation_models_pytorch.metrics.functional.iou_score
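The linked `iou_score` from segmentation_models_pytorch computes IoU from confusion counts and supports a `reduction` parameter, so per-image scores can be kept rather than averaged away. A small sketch of the underlying statistic (plain Python, not the SMP API itself; the example counts are made up):

```python
def iou_from_counts(tp, fp, fn):
    """IoU from confusion counts: tp / (tp + fp + fn)."""
    denom = tp + fp + fn
    return tp / denom if denom else 1.0

# Hypothetical per-image (tp, fp, fn) counts from three tiles.
counts = [(8, 2, 2), (5, 0, 5), (10, 0, 0)]

# No reduction: one IoU per image, usable in a per-image report.
per_image = [iou_from_counts(*c) for c in counts]

# Micro reduction: pool counts across images, then compute one IoU.
micro = iou_from_counts(*(sum(col) for col in zip(*counts)))
```

Note that the pooled (micro) score and the mean of the per-image scores generally differ, which is exactly why keeping the per-image values matters for benchmarking.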

— CharlesAuthier, Apr 12 '22