
MSN (Masked Siamese Networks) for ViT

sayakpaul opened this pull request 3 years ago • 6 comments

What does this PR do?

Adds the MSN checkpoints for ViT. MSN shines in few-shot regimes, which benefits real-world use cases. Later, we could add a pre-training script so that people can perform MSN pre-training on their own datasets.

Closes #18758
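
Once the weights are hosted, feature extraction should look roughly like this (a minimal sketch; the checkpoint path is a placeholder until the Hub paths are finalized):

```python
from PIL import Image
import requests
from transformers import AutoFeatureExtractor, ViTMSNModel

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Checkpoint path is an assumption until the weights move to the Facebook org.
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/vit-msn-small")
model = ViTMSNModel.from_pretrained("facebook/vit-msn-small")

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
features = outputs.last_hidden_state  # (batch, sequence_length, hidden_size)
```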

Who can review?

@sgugger @NielsRogge @amyeroberts

TODO

  • [x] Add documentation
  • [x] Add rest of the files for repo consistency
  • [ ] Host MSN weights on the Facebook org on HF Hub (@NielsRogge ?)
  • [ ] Change the checkpoint paths wherever needed

sayakpaul avatar Aug 30 '22 11:08 sayakpaul

The documentation is not available anymore as the PR was closed or merged.

@NielsRogge, after studying the MSN pre-training script thoroughly, I am still unsure how to put together a ViTMSNForPreTraining analogous to ViTMAEForPreTraining. There are multiple moving pieces that I think are best off residing inside a standalone pre-training script:

  • Both the EMA and sharpening components operate with their own schedules (a rough sketch below).

Given this, I think it's best to resort to a separate pre-training script and use this model for feature extraction and fine-tuning.
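
To illustrate, here is what those two pieces amount to (helper names and signatures are my own, following the spirit of the official MSN pre-training script):

```python
import torch

def ema_update(target_encoder, online_encoder, momentum):
    # Momentum (EMA) update of the target encoder from the online one;
    # `momentum` itself is annealed on its own schedule over training.
    with torch.no_grad():
        for t, o in zip(target_encoder.parameters(), online_encoder.parameters()):
            t.mul_(momentum).add_((1.0 - momentum) * o)

def sharpen(probs, temperature):
    # Sharpen the target prototype assignments; `temperature` also
    # follows a separate schedule during pre-training.
    sharpened = probs ** (1.0 / temperature)
    return sharpened / sharpened.sum(dim=-1, keepdim=True)
```

Keeping these schedules in a script rather than inside the model class mirrors how the original implementation organizes them.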

There's an ongoing discussion around releasing the weights of the linear classification layers and the fine-tuned models. Once those are available, we could support them directly via ViTMSNForImageClassification. Regardless, I am happy to add a ViTMSNForImageClassification now for easy access.
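
Usage would look something like this (a sketch; the checkpoint path and label count are placeholders):

```python
from transformers import ViTMSNForImageClassification

# Loads the self-supervised MSN backbone and attaches a randomly
# initialized classification head for fine-tuning. The checkpoint
# path is an assumption until the weights are moved to the right org.
model = ViTMSNForImageClassification.from_pretrained(
    "facebook/vit-msn-small",
    num_labels=10,  # e.g. a 10-class downstream dataset
)
```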

What do you think?

sayakpaul avatar Aug 31 '22 03:08 sayakpaul

Thanks for your PR! It would be great to have the ViTMSNForImageClassification even if there are no released weights for image classification, so users can already fine-tune the main checkpoint if they want.

For pretraining, if multiple new pieces are needed, maybe it could go in a research project at first, where you can add more modules?

sgugger avatar Sep 01 '22 11:09 sgugger

> For pretraining, if multiple new pieces are needed, maybe it could go in a research project at first, where you can add more modules?

Sounds good to me.

> Thanks for your PR! It would be great to have the ViTMSNForImageClassification even if there are no released weights for image classification, so users can already fine-tune the main checkpoint if they want.

Sure, I will continue the work from here on then. Thank you!

sayakpaul avatar Sep 01 '22 11:09 sayakpaul

@sgugger @NielsRogge @amyeroberts ready for review.

sayakpaul avatar Sep 13 '22 04:09 sayakpaul

@sgugger @NielsRogge @amyeroberts a friendly nudge on the PR.

sayakpaul avatar Sep 20 '22 08:09 sayakpaul

@sgugger addressed your comments. After the weights are transferred to the right org, I will open a PR there to add a README.

sayakpaul avatar Sep 22 '22 04:09 sayakpaul

Hi @sayakpaul. First, thank you for this PR 🤗.

The doctest for this model is currently failing: the line at https://github.com/huggingface/transformers/blob/7e84723fe4e9a232e5e27dc38aed373c0c7ab94a/src/transformers/models/vit_msn/modeling_vit_msn.py#L646 prints the predicted label, but no expected value is provided.

The config has LABEL_0 ... LABEL_999 in id2label, but I feel it should be the actual labels for the COCO dataset.

Could you take a look at this config, as well as the missing expected outputs for the doctest? Thank you!

Here is the failing doctest job:

https://github.com/huggingface/transformers/actions/runs/3109562462/jobs/5039877349

ydshieh avatar Sep 23 '22 15:09 ydshieh

> The config has LABEL_0 ... LABEL_999 in id2label, but I feel it should be the actual labels for the COCO dataset.

The model was trained on ImageNet-1k.

I will add the expected outputs. Thanks for flagging it.
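
For the config, the usual pattern from the other conversion scripts should work here too (the dataset repo and file names below are what those scripts use; I'm assuming they apply to this model as well, and `config` stands for the ViTMSNConfig being prepared):

```python
import json
from huggingface_hub import hf_hub_download

# Replace the placeholder LABEL_0 ... LABEL_999 entries with
# human-readable ImageNet-1k class names.
repo_id = "huggingface/label-files"
filename = "imagenet-1k-id2label.json"
with open(hf_hub_download(repo_id, filename, repo_type="dataset")) as f:
    id2label = {int(k): v for k, v in json.load(f).items()}

config.id2label = id2label
config.label2id = {v: k for k, v in id2label.items()}
```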

sayakpaul avatar Sep 23 '22 15:09 sayakpaul