semantic-segmentation-pytorch
Why does mIoU use sum(intersect)/sum(union) instead of the mean over per-image IoU?
I found the mIoU evaluation code here: https://github.com/CSAILVision/semantic-segmentation-pytorch/blob/8f27c9b97d2ca7c6e05333d5766d144bf7d8c31b/eval.py#L98
I am wondering what the motivation is for using sum(intersect)/sum(union) instead of the mean over sample-wise intersect/union. The former seems to disregard that the test samples are i.i.d.
Thanks!
This appears to be a common point of confusion in semantic segmentation. Some works report a global IoU, accumulating intersections and unions over the whole dataset before dividing, while others average the per-image IoU.
https://github.com/IvLabs/stagewise-knowledge-distillation/issues/12#issuecomment-650696538
https://stats.stackexchange.com/questions/554724/is-there-an-official-procedure-to-compute-miou-mean-intersection-over-union
I've noticed others raising similar questions, and unfortunately there's no universally accepted convention. I think it's crucial to specify which approach you're using, and to make sure you compare against the same mIoU variant when publishing a paper.
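For concreteness, here is a minimal sketch of the two conventions, assuming predictions and ground truths are NumPy arrays of class indices. The helper names (`iou_per_class`, `global_miou`, `per_image_miou`) are hypothetical and not functions from this repo; the global variant mirrors the accumulate-then-divide pattern used in `eval.py`, while the per-image variant averages each image's own IoU.

```python
import numpy as np

def iou_per_class(pred, gt, num_classes):
    # Per-class intersection and union pixel counts for a single image.
    inter = np.zeros(num_classes)
    union = np.zeros(num_classes)
    for c in range(num_classes):
        p, g = pred == c, gt == c
        inter[c] = np.logical_and(p, g).sum()
        union[c] = np.logical_or(p, g).sum()
    return inter, union

def global_miou(preds, gts, num_classes):
    # Dataset-level mIoU: sum intersections and unions over all images,
    # divide once per class, then average over classes that appear.
    inter_sum = np.zeros(num_classes)
    union_sum = np.zeros(num_classes)
    for pred, gt in zip(preds, gts):
        inter, union = iou_per_class(pred, gt, num_classes)
        inter_sum += inter
        union_sum += union
    iou = inter_sum / np.maximum(union_sum, 1)  # guard against empty classes
    return float(iou[union_sum > 0].mean())

def per_image_miou(preds, gts, num_classes):
    # Image-level mIoU: compute IoU per image (over classes present in
    # that image), then average the per-image scores.
    scores = []
    for pred, gt in zip(preds, gts):
        inter, union = iou_per_class(pred, gt, num_classes)
        valid = union > 0
        scores.append((inter[valid] / union[valid]).mean())
    return float(np.mean(scores))
```

The two numbers can differ noticeably: the global version weights each pixel equally (so large images and large objects dominate), while the per-image version weights each image equally, which is why stating the exact protocol matters when comparing results.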