
Directly evaluate COCO-Stuff model on ADE-20K

Open Jeff-LiangF opened this issue 2 years ago • 3 comments

@dingjiansw101 Hi Jian, thanks for your great work! I am wondering whether you happened to test your trained COCO-Stuff model directly on the ADE-20K dataset. Concurrent works such as [1][2] all report this transfer number, and it would be very interesting to compare your work with these counterparts. Thanks!

[1] Xu, Mengde, et al. "A simple baseline for zero-shot semantic segmentation with pre-trained vision-language model." arXiv preprint arXiv:2112.14757 (2021).
[2] Ghiasi, Golnaz, et al. "Open-vocabulary image segmentation." arXiv preprint arXiv:2112.12143 (2021).

Jeff-LiangF avatar Jun 27 '22 18:06 Jeff-LiangF

Yes, I have tried this before. Checking an old log from a previous experiment, the COCO-Stuff → ADE20K-150 generalization performance is 16.4 mIoU. However, I am not sure whether that was the newest model, and I need to check the details before comparing with other methods. Of course, you can also test it yourself, since we have released the models and code.
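For reference, the mIoU number quoted above is the standard semantic-segmentation metric: per-class intersection-over-union between predicted and ground-truth label maps, averaged over the classes present. This is a minimal NumPy sketch of that computation, not the actual evaluation code from the ZegFormer repository (which follows the detectron2 evaluation pipeline); the function name and the toy label maps are illustrative.

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union across classes.

    pred, gt: integer label maps of the same shape.
    Classes absent from both prediction and ground truth are skipped,
    so they do not drag the mean down.
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy example: 2x2 label maps with 2 classes.
# Class 0: IoU = 1/2; class 1: IoU = 2/3; mean ~= 0.583.
pred = np.array([[0, 1], [1, 1]])
gt = np.array([[0, 0], [1, 1]])
print(round(mean_iou(pred, gt, num_classes=2), 3))  # 0.583
```

In the cross-dataset setting discussed here, `num_classes` would be 150 (the ADE20K label set) while the model was trained on COCO-Stuff categories.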

dingjiansw101 avatar Jun 27 '22 19:06 dingjiansw101

Thanks for your prompt help! It would be great if you could test your best model and report the number so that the community can compare against your results. I'll also try to test it from my end. :)

Jeff-LiangF avatar Jun 27 '22 20:06 Jeff-LiangF

Sure, I will update the results later.

dingjiansw101 avatar Jun 27 '22 20:06 dingjiansw101