Megatron-LM
Support for Megatron-VLM training
In this pull request, we open-source our solution for vision-language model (VLM) training and inference in pure Megatron-style code. This codebase supports:
- The Megatron ViT model and its weight converter.
- Uneven pipeline-parallel splitting when the first pipeline stage hosts the ViT, which we find speeds up training by a large margin (a sketch of the layer split follows this list).
- Sequence-parallel (SP) and context-parallel (CP) support for VLM training, for both the ViT and the LM. This is non-trivial because we must guarantee that the ViT on every rank receives gradients: since SP and CP split the sequence, some ranks hold only text tokens (a sketch follows this list).
- Decoupled pipeline-parallel (PP) sizes for the ViT and the GPT, since Megatron uses a single global mpu for all models (a sketch follows this list).
- Multi-modal inference code.
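
As a rough illustration of the uneven split, here is a minimal sketch (a hypothetical helper, not the code in this PR) that gives the first stage fewer LM layers to offset the cost of the ViT:

```python
# Hypothetical sketch: compute an uneven layer split so that the first
# pipeline stage, which also hosts the ViT, carries fewer LM layers.
def split_layers_unevenly(num_lm_layers: int, pp_size: int,
                          first_stage_lm_layers: int) -> list[int]:
    remaining = num_lm_layers - first_stage_lm_layers
    per_stage, extra = divmod(remaining, pp_size - 1)
    # Later stages share the remaining layers as evenly as possible.
    return [first_stage_lm_layers] + [
        per_stage + (1 if i < extra else 0) for i in range(pp_size - 1)
    ]

# Example: 32 LM layers, PP=4, first stage keeps 5 layers -> [5, 9, 9, 9].
assert split_layers_unevenly(32, 4, 5) == [5, 9, 9, 9]
```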
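The SP/CP gradient issue can be sketched as follows (a hedged illustration with a hypothetical `splice_vision_tokens` helper, not this PR's actual implementation): every rank runs the ViT forward, and a rank whose sequence shard holds only text tokens adds the vision output with weight zero, so the ViT stays in the autograd graph everywhere and the gradient all-reduce does not stall.

```python
import torch

def forward_with_vit(text_hidden: torch.Tensor,
                     images: torch.Tensor,
                     vit: torch.nn.Module,
                     shard_has_image_tokens: bool) -> torch.Tensor:
    # Run the ViT on every rank, even if this rank's sequence shard
    # contains no image tokens.
    vision_hidden = vit(images)
    if shard_has_image_tokens:
        # Real path: splice vision embeddings into this shard's sequence
        # (splice_vision_tokens is a hypothetical helper).
        return splice_vision_tokens(text_hidden, vision_hidden)
    # Dummy path: contributes exactly 0 to the loss but keeps every ViT
    # parameter in the autograd graph, so this rank still produces (zero)
    # gradients and the data-parallel all-reduce sees matching collectives.
    return text_hidden + 0.0 * vision_hidden.sum()
```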
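And for the decoupled PP sizes, a sketch of the idea under an assumed rank layout (hypothetical code, not this PR's actual mpu changes): because Megatron keeps one global mpu, the ViT's collectives need their own process group built alongside the GPT's.

```python
import torch.distributed as dist

def build_vit_group(world_size: int, gpt_pp_size: int):
    # Assumed layout for illustration: rank r sits on GPT pipeline stage
    # r % gpt_pp_size, and the ViT lives only on stage 0 (effectively PP=1).
    vit_ranks = [r for r in range(world_size) if r % gpt_pp_size == 0]
    # new_group must be called collectively on all ranks; only the ranks in
    # vit_ranks then use the returned group for the ViT's gradient all-reduce.
    return dist.new_group(ranks=vit_ranks)
```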
The running example is in the `examples/llava` folder.
We hope our work contributes to the open-source community. Questions and feedback are welcome!
Hi. Thanks for creating this PR. We (NVIDIA) are planning to release VLM training functionality in Megatron Core in the next couple of weeks. As you may have seen, we've been pushing out some preparatory code to support this. Our initial release will be an example of pretraining and SFT for a LLaVA-architecture model with Llama 3 and CLIP backbones, along with a general multimodal WebDataset-based dataloader. We're reviewing your PR internally to see if we can incorporate any of your work alongside ours, and will be sure to credit you if we do.
Thanks again!
Thank you for your attention! Looking forward to the official implementation!
Hello, I have a question about this PR: how are the ViT and the LLM split across PP stages when `independent_parallel = True`? Thank you!
@wangxiang2713 The ViT will be placed in the first pipeline stage of the LM.
Tell me more about your questions.
Marking as stale. No activity in 60 days.