[SFT VLM] Added support for Molmo models via standalone script `sft_vlm_molmo`
What does this PR do?
Fixes #2136.
This PR adds a standalone script supporting Molmo models. It could later be generalized for compatibility with `sft_vlm.py`.
This notebook has a reproducible version, both by running the script and by using the code directly.
Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the contributor guideline, Pull Request section?
- [x] Was this discussed/approved via a GitHub issue? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the documentation guidelines.
- [ ] Did you write any new necessary tests?
Who can review?
@lewtun @edbeeching @qgallouedec
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Hi @sergiopaniego, thanks for implementing this! Could you run `make precommit` to format the code so the quality tests pass? (You may have to `pip install pre-commit`.)
We are discussing internally how feasible it is to harmonize this script with the other VLM training scripts; I will let you know when we reach a conclusion.
Updated!
Any updates on the harmonization discussion? I'm happy to make any modifications needed!
@sergiopaniego so is this working in theory? It's also OOM'ing for me: it needs ~50 GB and my A100 only has about 40 GB. Is there a lever I can pull to decrease the memory? Why does it need so much, considering it is doing a LoRA?
Is it possible to set this up to train on multiple GPUs?
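On the multi-GPU question: TRL example scripts are typically launched across GPUs with `accelerate`. The command below is only an illustrative sketch; the script path, model name, and flag values are assumptions, not a tested invocation from this PR.

```shell
# Illustrative multi-GPU launch (assumes 2 GPUs and this PR's script path).
# Run `accelerate config` once first, or pass launcher flags explicitly.
accelerate launch --num_processes 2 \
    examples/scripts/sft_vlm_molmo.py \
    --model_name_or_path allenai/Molmo-7B-D-0924 \
    --per_device_train_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --output_dir sft-molmo
```

With `--num_processes 2`, each GPU holds a replica and gradients are synchronized via DDP; for models that don't fit on one card, a DeepSpeed or FSDP accelerate config would be needed instead.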
Sorry for the late response @mshuffett. It still needs some polishing: while testing it, it seems something is still missing from the shared model artifacts. You can see more details in the README. For example, since gradient checkpointing is disabled, memory consumption increases a lot.
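On the memory question more generally: the usual levers for LoRA fine-tuning are 4-bit quantization of the base weights (QLoRA-style) plus gradient checkpointing. A minimal sketch, assuming the standard `transformers`/`peft` APIs and a hypothetical checkpoint name; this is not the PR's code:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit (NF4) quantization config: cuts base-weight memory roughly 4x.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "allenai/Molmo-7B-D-0924",  # hypothetical checkpoint name
    quantization_config=bnb_config,
    trust_remote_code=True,
)

# Trade extra compute for activation memory.
model.gradient_checkpointing_enable()

# Attach small LoRA adapters; only these are trained.
lora = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()
```

Note that LoRA alone does not shrink the frozen base weights or activations, which is why a 7B-class VLM can still OOM a 40 GB card without quantization and checkpointing.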
It's also not yet merged in the official transformers repo https://github.com/huggingface/transformers/pull/33962
In case anybody is looking for an updated script: since the transformers PR is close to being merged, here are some resources:
- SFT fine-tuning Colab using the HF-converted version of the model, thanks to @smellslikeml. I've also generated an updated Colab.
- Gist for the updated `sft_vlm_molmo.py` script. The code from the transformers PR is currently needed for this to be useful.
- SFT model showing that the pipeline is working.
There has been no activity on this branch for several months, so I'm closing this PR. Please feel free to open a new PR if there is renewed activity.