MSN (Masked Siamese Networks) for ViT
What does this PR do?
Adds the MSN checkpoints for ViT. MSN shines in few-shot regimes, which benefits real-world use cases. Later, we could add a pre-training script so that people can perform pre-training with MSN on their own datasets.
Closes #18758
Who can review?
@sgugger @NielsRogge @amyeroberts
TODO
- [x] Add documentation
- [x] Add rest of the files for repo consistency
- [ ] Host MSN weights on the Facebook org on HF Hub (@NielsRogge ?)
- [ ] Change the checkpoint paths wherever needed
@NielsRogge, after studying the MSN pre-training script thoroughly, I am still unsure how to put together a ViTMSNForPreTraining similar to ViTMAEForPreTraining. There are multiple moving pieces that I think are best kept inside a standalone pre-training script:
- A target encoder updated with EMA.
- Learnable prototypes that are needed to compute the final MSN loss.
- Target sharpening, among other things.
Both the EMA and sharpening components operate with their own schedules.
Given this, I think it's best to resort to a separate pre-training script and use this model for feature extraction and fine-tuning.
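To make these pieces concrete, here is a minimal sketch of what such a standalone pre-training script would need to juggle. This is illustrative only, not the actual MSN code; the module sizes, schedule values, and names below are placeholders:

```python
import copy
import torch

# Placeholder for the anchor ViT encoder (shapes are illustrative).
encoder = torch.nn.Linear(768, 768)
# EMA copy of the encoder; never updated by gradients.
target_encoder = copy.deepcopy(encoder)
for param in target_encoder.parameters():
    param.requires_grad = False

# Learnable prototypes used to compute the final MSN loss.
prototypes = torch.nn.Parameter(torch.randn(1024, 768))

@torch.no_grad()
def ema_update(momentum: float) -> None:
    # target <- m * target + (1 - m) * online, where m follows its own schedule.
    for online_p, target_p in zip(encoder.parameters(), target_encoder.parameters()):
        target_p.mul_(momentum).add_(online_p, alpha=1.0 - momentum)

def sharpen(probs: torch.Tensor, temperature: float) -> torch.Tensor:
    # Target sharpening: raise probabilities to 1/T and renormalize;
    # the temperature T is annealed over training on its own schedule.
    sharp = probs ** (1.0 / temperature)
    return sharp / sharp.sum(dim=-1, keepdim=True)
```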
There's an ongoing discussion around releasing the weights of the linear classification layers and fine-tuned models. When those are released, we could support them directly via ViTMSNForImageClassification. Regardless, I am happy to add a ViTMSNForImageClassification for easy access.
What do you think?
Thanks for your PR! It would be great to have the ViTMSNForImageClassification even if there are no released weights for image classification, so users can already fine-tune the main checkpoint if they want.
For pretraining, if multiple new pieces are needed, maybe it could go in a research project at first, where you can add more modules?
> For pretraining, if multiple new pieces are needed, maybe it could go in a research project at first, where you can add more modules?
Sounds good to me.
> Thanks for your PR! It would be great to have the ViTMSNForImageClassification even if there are no released weights for image classification, so users can already fine-tune the main checkpoint if they want.
Sure, I will continue the work from here on then. Thank you!
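For reference, fine-tuning the main checkpoint through ViTMSNForImageClassification could look roughly like the sketch below; the checkpoint path is an assumption, since the weights haven't been moved to the Facebook org yet:

```python
from transformers import ViTMSNForImageClassification

# The checkpoint path is a placeholder until the weights land on the Hub.
model = ViTMSNForImageClassification.from_pretrained(
    "facebook/vit-msn-small",  # assumed final Hub location
    num_labels=10,             # e.g., a 10-class downstream task
)
# The classification head is randomly initialized; fine-tune as usual,
# e.g., with the Trainer API or a plain PyTorch training loop.
```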
@sgugger @NielsRogge @amyeroberts ready for review.
@sgugger @NielsRogge @amyeroberts a friendly nudge on the PR.
@sgugger addressed your comments. After the weights are transferred to the right org, I will open a PR there to add a README.
Hi @sayakpaul. First, thank you for this PR 🤗.
The doctest for this model is currently failing: the example at https://github.com/huggingface/transformers/blob/7e84723fe4e9a232e5e27dc38aed373c0c7ab94a/src/transformers/models/vit_msn/modeling_vit_msn.py#L646 prints the predicted label, but no expected value is provided.
The config has LABEL_0 ... LABEL_999 in id2label, but I feel it should be the actual labels for the COCO dataset.
Could you take a look at this config, as well as the missing expected outputs for the doctest? Thank you!
Here is the failing doctest job:
https://github.com/huggingface/transformers/actions/runs/3109562462/jobs/5039877349
> The config has LABEL_0 ... LABEL_999 in id2label, but I feel it should be the actual labels for the COCO dataset.
The model was trained on ImageNet-1k, so the labels should come from there rather than COCO.
I will add the expected outputs. Thanks for flagging it.
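For reference, here is a sketch of how the config's id2label could be filled with the actual ImageNet-1k class names. It follows the pattern used in many of the library's conversion scripts and assumes the huggingface/label-files dataset repo:

```python
import json
from huggingface_hub import hf_hub_download

# Load the ImageNet-1k label mapping from the huggingface/label-files
# dataset repo (the pattern used in the library's conversion scripts).
path = hf_hub_download(
    "huggingface/label-files", "imagenet-1k-id2label.json", repo_type="dataset"
)
with open(path) as f:
    id2label = {int(k): v for k, v in json.load(f).items()}
label2id = {v: k for k, v in id2label.items()}

# These mappings can then be set on the model config before pushing to the Hub:
# config.id2label = id2label; config.label2id = label2id
```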