zsseg.baseline
Evaluation problem on zero-shot COCO dataset
Thank you for your excellent work, but I have one question about evaluation.
When I run evaluation on the COCO dataset in the zero-shot setting, I found that if I set DATASET.TEST to "coco_2017_test_stuff_sem_seg", I get:
"miou-base": 37.7, "miou-unbase": 36.8
These seem normal.
But when I set DATASET.TEST to "coco_2017_test_stuff_base_sem_seg" and "coco_2017_test_stuff_novel_sem_seg" respectively, I get:
"miou": 30.8 (base split), "miou": 66.0 (novel split)
I wonder why this strange discrepancy exists. Thanks!
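For concreteness, here is a minimal sketch of how the two evaluation modes can count pixels differently (toy labels and random data, not this repo's actual evaluator): the full test set computes per-class IoU over the whole label space and then averages within the base/novel groups, while a per-split evaluation ignores the other group's pixels, which changes the FP/FN counts.

```python
import numpy as np

# Hypothetical toy setup: classes 0-2 are "base", 3-4 are "novel".
NUM_CLASSES = 5
BASE_IDS, NOVEL_IDS = [0, 1, 2], [3, 4]

def confusion_matrix(gt, pred, num_classes):
    """Pixel-level confusion matrix; rows = ground truth, cols = prediction."""
    valid = (gt >= 0) & (gt < num_classes)  # drop ignored pixels (gt == -1)
    return np.bincount(
        num_classes * gt[valid] + pred[valid], minlength=num_classes ** 2
    ).reshape(num_classes, num_classes)

def per_class_iou(conf):
    """IoU per class: TP / (TP + FP + FN)."""
    tp = np.diag(conf)
    return tp / np.maximum(conf.sum(0) + conf.sum(1) - tp, 1)

rng = np.random.default_rng(0)
gt = rng.integers(0, NUM_CLASSES, size=10_000)
pred = rng.integers(0, NUM_CLASSES, size=10_000)

# Mode 1: one evaluation over the full label space, IoUs averaged per group
# (what "miou-base"/"miou-unbase" on the full test set would report).
iou = per_class_iou(confusion_matrix(gt, pred, NUM_CLASSES))
print("grouped:", iou[BASE_IDS].mean(), iou[NOVEL_IDS].mean())

# Mode 2: base-only split -- pixels whose GT is a novel class are marked as
# ignored, so they no longer count as false positives for base classes.
gt_base = np.where(np.isin(gt, BASE_IDS), gt, -1)
iou_base = per_class_iou(confusion_matrix(gt_base, pred, NUM_CLASSES))
print("base split:", iou_base[BASE_IDS].mean())
```

If the actual split datasets additionally remap category ids or restrict the classes the model can predict among, the numbers can move in either direction, which might explain both the drop on base and the jump on novel.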
Hello, have you solved this? I have the same question. Intuitively, the result on "coco_2017_test_stuff_base_sem_seg" should be higher than the "miou-base" from the full test set, not lower. Do you know the reason behind this strange phenomenon? Thank you so much.
Sorry, I haven't solved it yet.
> But when I set DATASET.TEST to "coco_2017_test_stuff_base_sem_seg" and "coco_2017_test_stuff_novel_sem_seg" respectively, I get:
> "miou": 30.8 (base split), "miou": 66.0 (novel split)
Sorry for the late reply. I have forgotten the details of the code and have no idea why this happens. If you find any possible errors that may cause the problem, please let me know or open a pull request. Thanks very much.
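One observation that may help whoever digs into this: if the split evaluation used the same predictions and merely ignored pixels whose ground truth belongs to the other group, then for every base class c, ignoring those pixels could only remove false positives (TP and FN for c are unchanged), so IoU_c = TP / (TP + FP + FN) could only stay equal or increase, matching the expectation above. A toy numeric check (hypothetical counts, not from this repo):

```python
# Hypothetical pixel counts for one base class c:
tp, fn = 80, 20       # true positives / false negatives for c
fp_base_gt = 30       # predicted c, GT is another *base* class
fp_novel_gt = 10      # predicted c, GT is a *novel* class

iou_full_set = tp / (tp + fp_base_gt + fp_novel_gt + fn)  # 80/140 ~ 0.571
iou_base_split = tp / (tp + fp_base_gt + fn)              # 80/130 ~ 0.615
print(iou_full_set, iou_base_split)  # the split IoU cannot be lower
```

Since the reported numbers go the other way (30.8 on the base split vs. 37.7 grouped), the split evaluation is probably not just masking pixels; checking whether the split datasets remap label ids or change the model's prediction space would be a good place to start.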