
A question about the validation code

Open · Italy2006 opened this issue on Sep 11, 2021 · 5 comments

Thank you for providing such convenient code, but I have some questions about it. On line 75 of eval.py, you use the cropped image directly for performance verification. Is this accurate? Another question: why not apply ms+flip (multi-scale + horizontal flip) at inference?

Italy2006 · Sep 11, 2021
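
(For context: "ms+flip" refers to test-time augmentation that averages predictions over several input scales and a horizontal flip. Below is a minimal sketch of the idea, assuming a PyTorch segmentation model that returns per-pixel logits; the function and argument names are illustrative and not taken from this repository.)

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ms_flip_inference(model, image, scales=(0.5, 0.75, 1.0, 1.25, 1.5)):
    """Average segmentation logits over multiple scales and a horizontal flip.

    image: (1, 3, H, W) float tensor; model(x) returns (1, C, h, w) logits.
    """
    _, _, H, W = image.shape
    total = None
    for s in scales:
        scaled = F.interpolate(image, scale_factor=s, mode="bilinear",
                               align_corners=False)
        # Forward pass on the scaled image and on its horizontal flip.
        logits = model(scaled)
        flipped = torch.flip(model(torch.flip(scaled, dims=[3])), dims=[3])
        # Resize both back to the original resolution and average.
        logits = F.interpolate(logits, size=(H, W), mode="bilinear",
                               align_corners=False)
        flipped = F.interpolate(flipped, size=(H, W), mode="bilinear",
                                align_corners=False)
        avg = (logits + flipped) / 2
        total = avg if total is None else total + avg
    return (total / len(scales)).argmax(dim=1)  # (1, H, W) label map
```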

Are you talking about the DeepLabV3 code?

As I mentioned, we strictly followed the DeepLabV3+ PyTorch implementation from https://github.com/VainF/DeepLabV3Plus-Pytorch and trained it with our pseudo labels.

qjadud1994 · Sep 12, 2021

But I looked at his DeepLabV2 library, and it does not use Resize_eval. I have also read other papers, and none of their implementations use Resize_eval. If you use Resize_eval, the model results improve significantly.

Italy2006 · Nov 7, 2021
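
(The point of contention is whether predictions are scored against resized inputs and labels, or upsampled back to the original-resolution ground truth before computing mIoU. A rough sketch of the latter convention follows, with illustrative names that are not taken from this repository.)

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def predict_at_label_resolution(model, image, orig_size):
    """Run the model on a resized/cropped input, then upsample the logits
    back to the original ground-truth resolution before scoring.

    image: (1, 3, h, w) tensor after the eval transform.
    orig_size: (H, W) of the untouched ground-truth mask.
    """
    logits = model(image)                                  # (1, C, h, w)
    logits = F.interpolate(logits, size=orig_size,
                           mode="bilinear", align_corners=False)
    return logits.argmax(dim=1)                            # (1, H, W)
```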

Another question: if you use Resize_eval, what do you do when you submit to the test set?

Italy2006 · Nov 7, 2021

For DeepLabV2, we did not use Resize_eval.

The DeepLabV3 result is an additional experiment we ran after publication, so we skip the test-mIoU and report only the val-mIoU, following the DeepLabV3 implementation.

I recommend you measure the val-mIoU and test-mIoU without Resize_eval, or use other DeepLabV3 implementations, because our contribution in WSSS is the high quality of the pseudo labels.

qjadud1994 · Nov 8, 2021
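
(For reference, val-mIoU is typically computed from a confusion matrix accumulated over the whole split. A standard sketch of that computation, not taken from this repository; the mask assumes the 255 "ignore" index used by PASCAL VOC.)

```python
import numpy as np

def fast_hist(pred, label, num_classes):
    """Accumulate a (num_classes x num_classes) confusion matrix.

    Pixels whose label falls outside [0, num_classes) are skipped,
    which handles the 255 'ignore' index used by PASCAL VOC.
    """
    mask = (label >= 0) & (label < num_classes)
    return np.bincount(
        num_classes * label[mask].astype(int) + pred[mask],
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)

def mean_iou(hist):
    """Per-class IoU = TP / (TP + FP + FN); mIoU is the class mean."""
    tp = np.diag(hist)
    iou = tp / (hist.sum(axis=1) + hist.sum(axis=0) - tp)
    return float(np.nanmean(iou))
```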

Thank you for your reply, but I still have a question. In the paper, your DeepLabV2 result on the validation set is 71.2%, so why does this repository report only 70.4%? In addition, would it be convenient for you to upload the code for submitting to the test set?

Italy2006 · Nov 17, 2021
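
(On the test-set question: the PASCAL VOC evaluation server expects one indexed-palette PNG per test image at the original image resolution. A rough sketch of writing predictions in that format follows; the output path in the comment is an assumption about the usual submission layout, and this is not the authors' submission code.)

```python
import numpy as np
from PIL import Image

def voc_palette(n=256):
    """Generate the standard PASCAL VOC color palette."""
    cmap = np.zeros((n, 3), dtype=np.uint8)
    for i in range(n):
        r = g = b = 0
        c = i
        for j in range(8):
            r |= ((c >> 0) & 1) << (7 - j)
            g |= ((c >> 1) & 1) << (7 - j)
            b |= ((c >> 2) & 1) << (7 - j)
            c >>= 3
        cmap[i] = [r, g, b]
    return cmap

def save_voc_prediction(pred, out_path):
    """pred: (H, W) uint8 label map at the original image resolution."""
    img = Image.fromarray(pred.astype(np.uint8), mode="P")
    img.putpalette(voc_palette().flatten().tolist())
    # Assumed layout: results/VOC2012/Segmentation/comp6_test_cls/<image_id>.png
    img.save(out_path)
```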