
Custom masking

Open harsmac opened this issue 1 year ago • 11 comments

Hi, thanks for the code. You answered that we can modify the PatchShuffle class to create custom masks. However, the patch shuffle class takes the output of a Conv2d layer, making it hard to know precisely what part of the image we are masking. Is there any reason for this?

Originally posted by @wenhaowang1995 in https://github.com/IcarusWizard/MAE/issues/14#issuecomment-1548504418

harsmac avatar Nov 30 '23 14:11 harsmac

Hi,

The PatchShuffle class is doing two things in sequence:

  1. Create the mask; the CNN output here is only used to specify the dimensions.
  2. Use the mask to mask out the input.

You can of course implement these two steps separately as two classes or functions; I implemented it this way only for convenience. It also differs from the official implementation because the official code had not yet been released when I wrote mine.
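Separating the two steps might look like this (a hedged sketch in PyTorch; the per-sample shuffle follows the same idea as PatchShuffle, but the function names here are mine, not from the repo):

```python
import torch

def make_patch_mask(batch: int, num_patches: int, mask_ratio: float):
    """Step 1: build a random keep-index per sample. Note this needs no image
    data at all, only the dimensions -- which is all the CNN output supplies."""
    len_keep = int(num_patches * (1 - mask_ratio))
    noise = torch.rand(batch, num_patches)       # one random score per patch
    shuffle = torch.argsort(noise, dim=1)        # random permutation per sample
    return shuffle[:, :len_keep]                 # indices of visible patches

def apply_patch_mask(patches: torch.Tensor, keep_idx: torch.Tensor):
    """Step 2: gather only the visible patches from a (B, N, D) tensor."""
    B, N, D = patches.shape
    return torch.gather(patches, 1, keep_idx.unsqueeze(-1).expand(-1, -1, D))

patches = torch.randn(2, 196, 768)               # e.g. ViT-B tokens, 224x224 / 16
idx = make_patch_mask(2, 196, mask_ratio=0.75)
visible = apply_patch_mask(patches, idx)
print(visible.shape)                             # torch.Size([2, 49, 768])
```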

It is also very straightforward to work out which patch comes from which region of the image. Say your input is a 224x224 image and the patch size is 14: the conv gives you a 16x16 grid of patches, and each patch on this grid comes from a distinct, non-overlapping 14x14 region of the original image.
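The patch-to-pixel mapping can be checked with plain arithmetic (a small helper of my own, assuming the usual row-major flattening that a Conv2d with stride equal to kernel size produces):

```python
def patch_to_region(patch_idx: int, img_size: int = 224, patch_size: int = 14):
    """Map a flattened patch index to its (row, col) pixel region in the image.
    Assumes non-overlapping patchify, i.e. Conv2d with stride == kernel_size."""
    grid = img_size // patch_size                 # 224 // 14 = 16 patches per side
    row, col = divmod(patch_idx, grid)            # row-major flattened index
    top, left = row * patch_size, col * patch_size
    return (top, top + patch_size), (left, left + patch_size)

print(patch_to_region(0))    # ((0, 14), (0, 14))   -- top-left patch
print(patch_to_region(17))   # ((14, 28), (14, 28)) -- second row, second column
```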

IcarusWizard avatar Dec 01 '23 10:12 IcarusWizard

Hi, thank you for sharing the code. Why didn't you use the sine-cosine positional embedding mentioned in the paper?

amirrezadolatpour2000 avatar Feb 17 '24 12:02 amirrezadolatpour2000

I can't find where they mention using a sin-cos positional embedding in the paper. Actually, the original ViT paper clearly states that a "learned" positional encoding is added after patchification. Also, for images it is not necessary to use a sin-cos positional encoding, since there is no extrapolation beyond the trained length. Could you point out where you read it?

IcarusWizard avatar Feb 17 '24 21:02 IcarusWizard

Sure, in the paper https://arxiv.org/abs/2111.06377, on page 11, first paragraph.

amirrezadolatpour2000 avatar Feb 17 '24 21:02 amirrezadolatpour2000

Ah, I see. Thanks for the reference. I didn't pay much attention to this detail. But, as I said, I don't think it will make a large difference to the result. Feel free to experiment with it.

IcarusWizard avatar Feb 17 '24 21:02 IcarusWizard

Also, I just checked their official code and they don't even follow this detail. The code uses the ViT model from timm which follows the details in the ViT paper with learned positional encoding.

IcarusWizard avatar Feb 17 '24 21:02 IcarusWizard

https://github.com/facebookresearch/mae/blob/main/models_mae.py You can see that they utilized the frozen positional embedding using the sine-cosine approach.
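For reference, the frozen 2D sine-cosine embedding can be sketched as below (a NumPy approximation of the idea; the official `util/pos_embed.py` may differ in minor indexing conventions, so treat this as illustrative):

```python
import numpy as np

def sincos_1d(embed_dim: int, pos: np.ndarray) -> np.ndarray:
    """Standard 1D sine-cosine embedding; embed_dim must be even."""
    omega = 1.0 / 10000 ** (np.arange(embed_dim // 2) / (embed_dim / 2.0))
    out = np.einsum('p,d->pd', pos, omega)        # (num_pos, embed_dim // 2)
    return np.concatenate([np.sin(out), np.cos(out)], axis=1)

def sincos_2d(embed_dim: int, grid_size: int) -> np.ndarray:
    """2D embedding: half the channels encode the row, half the column."""
    ys, xs = np.meshgrid(np.arange(grid_size), np.arange(grid_size), indexing='ij')
    emb_h = sincos_1d(embed_dim // 2, ys.reshape(-1).astype(float))
    emb_w = sincos_1d(embed_dim // 2, xs.reshape(-1).astype(float))
    return np.concatenate([emb_h, emb_w], axis=1)  # (grid_size**2, embed_dim)

pos = sincos_2d(768, 14)   # ViT-B/16 on 224x224 -> 14x14 token grid
print(pos.shape)           # (196, 768)
```

Since the table is a pure function of position, it is computed once and registered as a frozen (non-trainable) buffer rather than a learned parameter.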

amirrezadolatpour2000 avatar Feb 17 '24 21:02 amirrezadolatpour2000

Ah, thanks for the correction. I had looked at the wrong file. Then I don't know why they chose not to follow the ViT architecture precisely.

IcarusWizard avatar Feb 17 '24 22:02 IcarusWizard

Based on what I have studied, there are no specific rules for choosing the positional embedding. However, I want to try the sine-cosine approach and see the result; if I test it, I will let you know. I also want to be sure that this implementation covers the other details mentioned in the paper. I have checked it, but I want to be certain.

amirrezadolatpour2000 avatar Feb 17 '24 22:02 amirrezadolatpour2000

Oh, I don't think I followed all the details from the paper precisely. As the readme says, the purpose of this code is only to verify the idea of MAE, not to replicate it exactly. For example, I don't think I implemented the normalization for the reconstruction loss. There could be more details that I missed.
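The normalization in question is what the paper calls reconstructing normalized pixels: each target patch is normalized to zero mean and unit variance before the MSE, which is computed only on masked patches. A hedged sketch (my own function names; the official `forward_loss` in `models_mae.py` follows the same idea):

```python
import torch

def masked_recon_loss(pred, target, mask, norm_pix=True, eps=1e-6):
    """MSE over masked patches only; optionally normalize each target patch
    to zero mean / unit variance first (the paper's norm_pix_loss detail).
    pred, target: (B, N, patch_dim); mask: (B, N) with 1 = masked."""
    if norm_pix:
        mean = target.mean(dim=-1, keepdim=True)
        var = target.var(dim=-1, keepdim=True)
        target = (target - mean) / (var + eps) ** 0.5
    loss = ((pred - target) ** 2).mean(dim=-1)    # per-patch MSE
    return (loss * mask).sum() / mask.sum()       # average over masked patches

pred = torch.randn(2, 196, 588)                   # 14x14x3 pixels per patch
target = torch.randn(2, 196, 588)
mask = (torch.rand(2, 196) > 0.25).float()        # ~75% of patches masked
print(masked_recon_loss(pred, target, mask).item())
```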

IcarusWizard avatar Feb 17 '24 22:02 IcarusWizard

In my own experiments, it appears that using a frozen sine-cosine positional embedding speeds up learning quite significantly. I guess it makes sense, because that's one thing the network doesn't have to learn, so it can focus on reconstructing the right texture.

Anyway, I just wanted to let you know. Great repo otherwise!

hugoWR avatar Jun 13 '24 20:06 hugoWR