
Hybrid Data x Context Parallelism Feature

Open parthmannan opened this issue 1 month ago • 4 comments

What does this PR do?

Added PR for main here - https://github.com/NVIDIA/Megatron-LM/pull/2282

Design document discussed in MCore sync meeting - https://docs.google.com/document/d/1MnIPQ_VbpDNp-adtvcEv-SYx6A8rtt3-fDdxbcdrmk0/edit?usp=sharing

The first issue this MR is trying to solve is the workload imbalance between DP ranks when using packed sequences (for example in SFT). While packing sequences can help reduce variability in total sequence length, it does not guarantee equal workload: attention compute is quadratic in sequence length, so a single 1k-token sequence requires 2x more attention compute than a packed sequence made of two 512-token samples. The problem gets much worse with very long sequences and/or large variation between sequence lengths. This MR schedules a variable number of microbatches per rank across the DPxCP group to ensure a balanced workload.
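
As a back-of-the-envelope illustration of that imbalance (a minimal sketch, not code from this MR; the helper name is hypothetical), the quadratic cost model behaves as follows:

```python
def attention_cost(seq_lens):
    """Relative attention FLOPs for a packed sequence: each sample inside
    the pack contributes quadratically in its own length."""
    return sum(length * length for length in seq_lens)

# One 1024-token sample vs. a pack of two 512-token samples:
# 1024^2 = 1,048,576 vs. 2 * 512^2 = 524,288.
# The single long sequence costs 2x more attention compute even though
# both cases contain the same total number of tokens.
assert attention_cost([1024]) == 2 * attention_cost([512, 512])
```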

The second issue this MR is trying to solve is redundant CP communication. The context parallel size is chosen based on the full packed sequence length (usually the max sequence length across all samples). For example, if a 1k-token sequence requires CP2, we also apply CP2 to a packed sequence of two 512-token samples. In reality, we can easily partition that 2x512 pack across 2 GPUs by separating the two samples, with no CP at all. This MR introduces dynamic context parallelism, where each sample is individually scheduled with a dynamically sized CP group.
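
A minimal sketch of the per-sample CP sizing idea, assuming a hypothetical helper and the toy numbers from the paragraph above (a 512-token budget per GPU, CP2 configured); the MR's actual sizing logic may differ:

```python
import math

def dynamic_cp_size(sample_len, tokens_per_gpu, max_cp_size):
    """Smallest power-of-2 CP size such that each CP rank holds at most
    tokens_per_gpu tokens of this sample, capped at the configured max."""
    shards_needed = math.ceil(sample_len / tokens_per_gpu)
    cp = 1
    while cp < shards_needed:
        cp *= 2
    return min(cp, max_cp_size)

assert dynamic_cp_size(1024, tokens_per_gpu=512, max_cp_size=2) == 2  # the long sample still needs CP2
assert dynamic_cp_size(512, tokens_per_gpu=512, max_cp_size=2) == 1   # each 512 sample fits on one GPU, no CP
```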

To achieve the above, we introduce a balanced scheduler and a dataloader wrapper. The dataloader wrapper collects the metadata that informs the scheduler of the sequence length of each sample across the entire global batch; it also breaks the packed sequences back into individual samples, since samples are scheduled individually. Given this metadata, the balanced scheduler assigns samples to ranks (across the DPxCP group) together with a dynamic CP group size for each sample. To avoid deadlocks, we divide the schedule into groups (this replaces the notion of microbatches). Within each group, each rank is part of a fixed CP group, but ranks may run different numbers of samples so that compute is balanced across all ranks. A simplified sketch of the load-balancing step is shown below.
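
Here is a simplified, hypothetical sketch of just the load-balancing half (greedy longest-processing-time assignment with quadratic attention cost as the load metric); it ignores the dynamic CP sizing and the deadlock-free grouping that the real scheduler also handles:

```python
import heapq

def balanced_schedule(sample_lens, num_ranks):
    """Hand the most expensive remaining sample to the least-loaded rank.
    Ranks end up with different numbers of samples, but per-rank compute
    is kept as even as the samples allow."""
    heap = [(0, rank, []) for rank in range(num_ranks)]
    heapq.heapify(heap)
    for length in sorted(sample_lens, reverse=True):
        load, rank, assigned = heapq.heappop(heap)
        assigned.append(length)
        heapq.heappush(heap, (load + length * length, rank, assigned))
    return {rank: assigned for _, rank, assigned in heap}

# Eight samples of mixed lengths over four ranks: the 4k-token sample gets
# a rank to itself while the short samples are grouped several per rank.
print(balanced_schedule([4096, 2048, 1024, 1024, 512, 512, 512, 512], num_ranks=4))
```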

We have run performance and correctness evaluations of the feature. Using an SFT packed dataset with a max sequence length of 128k and a dummy LLaMa3 8B model, we see a 3x performance improvement with this feature. While there is room to improve the baseline itself, the speedup should remain in the 2-3x range.

This is what 128k sequence length with CP16 looks like without this feature: the GPU is bound by CP communication. (screenshot)

This is what 128k sequence length with CP16 looks like with this feature: the GPU is bound by attention compute, since all redundant CP communication has been removed. (screenshot)

Feature correctness (@xiaoyao0115): loss curve (hybrid_cp_loss_curve)

This is the first milestone of this feature, and there are many improvements we want to make in future releases.

  1. The feature does not support pipeline parallelism or FSDP yet. We hope to add PP support next.
  2. The feature is limited to dynamic CP groups whose sizes are powers of 2. We hope to add fully dynamic group sizes using changes in TransformerEngine DPA.
  3. The feature does not support CUDA graphs.
  4. The feature works best with FlashAttention rather than cuDNN FusedAttention. This is because the changing sequence lengths and CP sizes force cuDNN to recompile its graph, and all performance gains are lost. We will advocate for dynamic support in cuDNN FusedAttention.

Contribution process

flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]

Pre-checks

  • [ ] I want this PR in a versioned release and have added the appropriate Milestone (e.g., Core 0.8)
  • [ ] I have added relevant unit tests
  • [ ] I have added relevant functional tests
  • [ ] I have added proper typing to my code (see Typing guidelines)
  • [ ] I have added relevant documentation
  • [ ] I have run the autoformatter.sh on my PR

Code review

The following process is enforced via the CODEOWNERS file for changes into megatron/core. For changes outside of megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch

(Step 1): Add PR label Expert Review

(Step 2): Collect the expert reviewers reviews

  1. Attach the Expert Review label when your PR is ready for review.
  2. GitHub auto-assigns expert reviewers based on your changes. They will get notified and pick up your PR soon.

:warning: Only proceed to the next step once all reviewers have approved, merge conflicts are resolved, and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

(Step 3): Final Review

  1. Add Final Review label
  2. GitHub auto-assigns final reviewers based on your changes. They will get notified and pick up your PR soon.

(Optional Step 4): Cherry-pick into release branch

If this PR also needs to be merged into core_r* release branches, after this PR has been merged, select Cherry-pick to open a new PR into the release branch.

For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either [email protected] or [email protected].

Merging your PR

Any member of core-adlr and core-nemo will be able to merge your PR.

parthmannan avatar Oct 30 '25 22:10 parthmannan

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

copy-pr-bot[bot] avatar Oct 30 '25 22:10 copy-pr-bot[bot]

Hi @parthmannan, could you also start a main PR?

yanring avatar Nov 13 '25 00:11 yanring

Hi @parthmannan, could you also start a main PR?

Added PR for main here - https://github.com/NVIDIA/Megatron-LM/pull/2282 Will resolve conflicts shortly.

parthmannan avatar Nov 17 '25 23:11 parthmannan

/ok to test aa3858d

kunlunl avatar Nov 26 '25 13:11 kunlunl

/ok to test 60234b3

kunlunl avatar Dec 01 '25 12:12 kunlunl

/ok to test https://github.com/NVIDIA/Megatron-LM/pull/2054/commits/e2a32cb54edf51e9cb000b7e8ab4b55e58e7d846

parthmannan avatar Dec 01 '25 21:12 parthmannan

/ok to test https://github.com/NVIDIA/Megatron-LM/pull/2054/commits/d53c323bcfb6f088de4ad919adeccf615737f75a

parthmannan avatar Dec 01 '25 23:12 parthmannan

/ok to test e2a11bb

kunlunl avatar Dec 03 '25 07:12 kunlunl

/ok to test e2a11bb

@kunlunl, there was an error processing your request: E2

See the following link for more information: https://docs.gha-runners.nvidia.com/cpr/e/2/

copy-pr-bot[bot] avatar Dec 03 '25 07:12 copy-pr-bot[bot]

/ok to test d53c323bcfb6f088de4ad919adeccf615737f75a

kunlunl avatar Dec 03 '25 07:12 kunlunl

/ok to test https://github.com/NVIDIA/Megatron-LM/pull/2054/commits/70b9758cefdf016eb30559693f9dfc6ad4a8e246

parthmannan avatar Dec 04 '25 01:12 parthmannan

/ok to test https://github.com/NVIDIA/Megatron-LM/pull/2054/commits/9387269bf9a641293cbccd68bbbe4f1db874453d

parthmannan avatar Dec 04 '25 02:12 parthmannan

/ok to test https://github.com/NVIDIA/Megatron-LM/pull/2054/commits/2bde6c85a99988ad4c3a9d37e633908dfb3e8323

parthmannan avatar Dec 04 '25 02:12 parthmannan

/ok to test https://github.com/NVIDIA/Megatron-LM/pull/2054/commits/604ddd207fddb3f53adffb3e1190d94789a6e8b3

parthmannan avatar Dec 04 '25 21:12 parthmannan