
About `window_size` in MOAT

Open edwardyehuang opened this issue 2 years ago • 3 comments

I found that `window_size` is `None` in MOAT:
https://github.com/google-research/deeplab2/blob/eb66c852f86c2add70cf7067dd5430ddb2df3b5f/model/pixel_encoder/moat.py#L347-L360
https://github.com/google-research/deeplab2/blob/eb66c852f86c2add70cf7067dd5430ddb2df3b5f/model/pixel_encoder/moat.py#L405

Is only global attention used for segmentation tasks?

edwardyehuang avatar Nov 01 '22 13:11 edwardyehuang

Thanks for your interest!

Please see https://github.com/google-research/deeplab2/blob/7a01a7165e97b3325ad7ea9b6bcc02d67fecd07a/model/layers/moat_blocks.py#L329 for how to specify the desired window size for your use case.

Our settings can be found in the experimental sections of the paper, but here is the information: for COCO object detection, we use window-based attention with a 14x14 window for the third stage and global attention for the fourth stage. For ADE20K semantic segmentation, we use global attention for both the third and fourth stages.
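
A minimal sketch of these per-stage settings, assuming `None` means global attention; the layout below (one entry per attention stage) is an illustration only, not the exact deeplab2 configuration API:

```python
# Hypothetical per-stage window sizes for the two attention stages of MOAT.
# None = global attention; the real deeplab2 config layout may differ.

# COCO object detection: 14x14 windowed attention for the third stage,
# global attention for the fourth stage.
coco_window_size = {'stage3': [14, 14], 'stage4': None}

# ADE20K semantic segmentation: global attention for both stages.
ade20k_window_size = {'stage3': None, 'stage4': None}
```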

Chenglin-Yang avatar Nov 05 '22 01:11 Chenglin-Yang

Thanks for pointing that out.

I also noticed the implementation of the global window is flawed.

When the global window size is used, the current implementation still records a fixed window size that depends on the input size at build time. Therefore, if the given input size differs from the recorded size, the "global" attention is either limited to the recorded window or an error is raised directly (e.g., when the input size is smaller than the recorded window size).
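
To make the failure mode concrete, here is a minimal standalone sketch (not the actual MOAT code) of a layer that freezes its "global" window to the build-time spatial size, so a smaller input at call time fails:

```python
import tensorflow as tf


class GlobalAttentionSketch(tf.keras.layers.Layer):
  """Illustrates the bug: a 'global' window frozen to the build-time size."""

  def __init__(self, window_size=None, **kwargs):
    super().__init__(**kwargs)
    self._window_size = window_size

  def build(self, input_shape):
    if self._window_size is None:
      # The "global" window is recorded as a fixed size taken from the input
      # seen at build time, instead of staying dynamic.
      self._window_size = [input_shape[1], input_shape[2]]
    super().build(input_shape)

  def call(self, inputs):
    height, width = inputs.shape[1], inputs.shape[2]
    if height < self._window_size[0] or width < self._window_size[1]:
      raise ValueError(
          f'Input {height}x{width} is smaller than the recorded window size '
          f'{self._window_size}.')
    # ... attention over self._window_size windows would go here ...
    return inputs


layer = GlobalAttentionSketch(window_size=None)
layer(tf.zeros([1, 32, 32, 8]))    # build() records a fixed 32x32 "global" window.
try:
  layer(tf.zeros([1, 16, 16, 8]))  # different (smaller) input size at eval time
except ValueError as e:
  print(e)                         # limited/failing instead of truly global
```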

edwardyehuang avatar Nov 11 '22 14:11 edwardyehuang

Thank you for finding this typo.

If you want to evaluate the model with an input size that differs from the one used during training, you will need to create another model built with that input size and load the weights into it. This is how the current TensorFlow model works.
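
A minimal, generic Keras sketch of that workaround (the `make_model` helper below is a stand-in for rebuilding the backbone, not the deeplab2 entry point): build the same architecture at the evaluation resolution and transfer the trained weights.

```python
import tensorflow as tf


def make_model(input_size):
  # Stand-in for (re)building the backbone at a given input resolution.
  inputs = tf.keras.Input(shape=(*input_size, 3))
  outputs = tf.keras.layers.Conv2D(8, 3, padding='same')(inputs)
  return tf.keras.Model(inputs, outputs)


train_model = make_model((64, 64))    # model built (and trained) at 64x64
eval_model = make_model((128, 128))   # same architecture, evaluation resolution
eval_model.set_weights(train_model.get_weights())  # weights are size-agnostic here
# A real checkpoint could instead be restored into eval_model via
# eval_model.load_weights(...) or tf.train.Checkpoint.
```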

Chenglin-Yang avatar Nov 18 '22 19:11 Chenglin-Yang