# M4 + Dist Checkpoint: Replace global parallel state with explicit group parameters
## What does this PR do?

This PR refactors parallel group management to remove dependencies on the global `parallel_state.xxx` APIs. Key functions instead take explicit group parameters (`tp_group`, `pp_group`, `dp_cp_group`) and fall back to the existing global state when no group is provided.
## Key Changes

### 1. Explicit Group Parameters
- Added `tp_group`, `pp_group`, and `dp_cp_group` parameters to key functions in:
  - `megatron/training/checkpointing.py`
  - `megatron/training/utils.py`
  - `megatron/core/utils.py`
- Functions now accept `Optional[torch.distributed.ProcessGroup]` parameters with `None` defaults.
- When groups are `None`, the code falls back to the existing `mpu.get_xxx_group()` APIs for backward compatibility (see the sketch below).
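A minimal sketch of the fallback pattern, assuming the `mpu` alias exported by `megatron.core`; the signature and body are illustrative, not the actual implementation:

```python
from typing import Optional

import torch

from megatron.core import mpu  # legacy alias for megatron.core.parallel_state


def get_rng_state(
    use_dist_ckpt: bool,
    tp_group: Optional[torch.distributed.ProcessGroup] = None,
    pp_group: Optional[torch.distributed.ProcessGroup] = None,
):
    # New-style callers pass process groups explicitly; existing call
    # sites pass nothing and keep the previous global-state behavior.
    if tp_group is None:
        tp_group = mpu.get_tensor_model_parallel_group()
    if pp_group is None:
        pp_group = mpu.get_pipeline_model_parallel_group()
    ...
```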
### 2. Enhanced Metadata Handling

- Extended `_build_sharded_state_dict_metadata()` to include `dp_cp_group` in the metadata.
- Updated sharded state dict generation to properly propagate group information.
- `dp_cp_group` is now consistently sourced from metadata across checkpoint operations (sketched below).
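For illustration, a hypothetical reduction of `_build_sharded_state_dict_metadata()`; the real metadata carries more keys, and the fallback assumes the existing `parallel_state` API:

```python
from typing import Optional

import torch

from megatron.core import mpu


def _build_metadata_sketch(
    dp_cp_group: Optional[torch.distributed.ProcessGroup] = None,
) -> dict:
    if dp_cp_group is None:
        # Backward-compatible fallback to the global parallel state.
        dp_cp_group = mpu.get_data_parallel_group(with_context_parallel=True)
    # Checkpoint save/load paths read the group from this metadata rather
    # than querying parallel_state directly, so both sides agree on it.
    return {"dp_cp_group": dp_cp_group}
```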
### 3. Improved Group Sourcing Strategy

- **Tensor/pipeline groups**: sourced directly from `module.tp_group` and `module.pp_group`.
- **Data parallel + context parallel group**: sourced from metadata to ensure consistency across save/load operations.
- Utilizes the `get_pg_size()` and `get_pg_rank()` utilities for group introspection (see the sketch after this list).
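The sourcing strategy in code form: an illustrative sketch where `module` stands in for a core model chunk, assuming `get_pg_size()`/`get_pg_rank()` behave like `torch.distributed.get_world_size()`/`get_rank()` for a given group:

```python
from megatron.core.utils import get_pg_rank, get_pg_size


def describe_groups(module, metadata: dict) -> dict:
    # TP/PP groups come straight off the module; the dp+cp group comes
    # from checkpoint metadata so that save and load agree on it.
    tp_group = module.tp_group
    pp_group = module.pp_group
    dp_cp_group = metadata["dp_cp_group"]
    return {
        "tp": (get_pg_rank(tp_group), get_pg_size(tp_group)),
        "pp": (get_pg_rank(pp_group), get_pg_size(pp_group)),
        "dp_cp": (get_pg_rank(dp_cp_group), get_pg_size(dp_cp_group)),
    }
```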
### 4. Function Signature Updates

Key functions updated with explicit group parameters:

- `save_checkpoint()`
- `load_checkpoint()`
- `get_rng_state()`
- `_build_sharded_state_dict_metadata()`
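A hypothetical call site (the positional arguments follow the existing `save_checkpoint()` interface; `model_chunk` and `dp_cp_group` are assumed to come from the surrounding training loop, and omitting the group keywords preserves the old global-state behavior):

```python
save_checkpoint(
    iteration,
    model,
    optimizer,
    opt_param_scheduler,
    num_floating_point_operations_so_far,
    tp_group=model_chunk.tp_group,  # explicit tensor-parallel group
    pp_group=model_chunk.pp_group,  # explicit pipeline-parallel group
    dp_cp_group=dp_cp_group,        # data + context parallel group from metadata
)
```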
## Contribution process

```mermaid
flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]
```
### Pre-checks

- [ ] I want this PR in a versioned release and have added the appropriate Milestone (e.g., `Core 0.8`)
- [ ] I have added relevant unit tests
- [ ] I have added relevant functional tests
- [ ] I have added proper typing to my code (see the Typing guidelines)
- [ ] I have added relevant documentation
- [ ] I have run `autoformatter.sh` on my PR
### Code review

The following process is enforced via the `CODEOWNERS` file for changes into `megatron/core`. For changes outside of `megatron/core`, it is up to the PR author whether or not to tag the Final Reviewer team.
#### For MRs into the `main` branch

(Step 1): Add the `Expert Review` PR label

(Step 2): Collect the expert reviewers' reviews

- Attach the `Expert Review` label when your PR is ready for review.
- GitHub auto-assigns expert reviewers based on your changes. They will be notified and pick up your PR soon.
:warning: Only proceed to the next step once all reviewers have approved, merge conflicts are resolved, and the CI is passing. Final Review may be declined if these requirements are not fulfilled.
(Step 3): Final Review

- Add the `Final Review` label.
- GitHub auto-assigns final reviewers based on your changes. They will be notified and pick up your PR soon.
(Optional Step 4): Cherry-pick into release branch

If this PR also needs to be merged into `core_r*` release branches, select Cherry-pick after this PR has been merged to open a new PR into the release branch.
#### For MRs into the `dev` branch

The proposed review process for the `dev` branch is under active discussion. MRs are mergeable after one approval by a member of either `core-adlr` or `core-nemo`.
### Merging your PR

Any member of `core-adlr` and `core-nemo` will be able to merge your PR.