[#8948][feat] Support custom sharding config
Fixes #9154 Fixes #8948
Summary by CodeRabbit
- New Features
- Added a manual tensor parallelism sharding configuration option for auto-deployment workflows. Users now have granular control over how individual model components (attention layers, feedforward networks, mixture-of-experts modules, and specialized latent projections) are distributed across multiple processing units during deployment, enabling custom parallelization strategies.
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
- [ ] PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
- [ ] PR follows the TRT-LLM CODING GUIDELINES to the best of your knowledge.
- [ ] Test cases are provided for new code paths (see test instructions).
- [ ] Any new dependencies have been scanned for license and vulnerabilities.
- [ ] CODEOWNERS updated if ownership changes.
- [ ] Documentation updated as needed.
- [ ] Update the tava architecture diagram if there is a significant design change in the PR.
- [ ] The reviewers assigned automatically/manually are appropriate for the PR.
- [x] Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...
Provides a user-friendly way for developers to interact with a Jenkins server.
Run /bot [-h|--help] to print this help message.
See details below for each supported subcommand.
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]
Launch build/test pipelines. All previously running jobs will be killed.
--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option is always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.
--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.
--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.
--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.
For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
kill
Kill all running builds associated with the pull request.
skip --comment COMMENT
Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.
reuse-pipeline
Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.
📝 Walkthrough
Walkthrough
The changes introduce support for manual tensor parallel (TP) sharding configuration. A new MANUAL enum value is added to the ShardingSource enum, and a new manual configuration section is added to the default YAML file that specifies sharding strategies for various layer components across Mamba, attention, and MoE layers.
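For illustration, a hedged sketch of what such a `manual_config` block could look like in `default.yaml`. The key names (`sharding_source`, `tp_plan`) and the `colwise`/`rowwise` values below are assumptions chosen to illustrate the idea, not copied from the PR diff; only the layer component names come from the change summary.

```yaml
# Illustrative sketch only -- the real manual_config schema in
# tensorrt_llm/_torch/auto_deploy/config/default.yaml may differ.
detect_sharding:
  sharding_source: manual
  manual_config:
    tp_plan:
      # Attention projections
      q_proj: colwise
      k_proj: colwise
      v_proj: colwise
      o_proj: rowwise
      # MLP / shared experts
      gate_proj: colwise
      up_proj: colwise
      down_proj: rowwise
      # Mamba SSM projections
      in_proj: colwise
      out_proj: rowwise
      # MoLE latent projections
      fc1_latent_proj: colwise
      fc2_latent_proj: rowwise
```

A typical tensor-parallel convention pairs column-wise sharding on the input projections with row-wise sharding on the output projection, so each layer needs only one all-reduce.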
Changes
| Cohort / File(s) | Change Summary |
|---|---|
| Configuration `tensorrt_llm/_torch/auto_deploy/config/default.yaml` | Adds new `manual_config` block under `detect_sharding` with TP sharding strategies defined for layer components: Mamba SSM (`in_proj`, `out_proj`), attention (`q_proj`, `k_proj`, `v_proj`, `o_proj`), MLP/shared experts (`gate_proj`, `up_proj`, `down_proj`), and MoLE latent projections (`fc1_latent_proj`, `fc2_latent_proj`). |
| Utilities `tensorrt_llm/_torch/auto_deploy/utils/sharding_utils.py` | Adds `MANUAL = "manual"` enum member to `ShardingSource` enum. |
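The enum change can be sketched minimally as follows. Only `MANUAL = "manual"` is confirmed by the PR; the other members shown are hypothetical placeholders for whatever `ShardingSource` already contains.

```python
from enum import Enum


class ShardingSource(Enum):
    """Sketch of the ShardingSource enum after this change.

    HEURISTIC and FACTORY are illustrative placeholders; only the
    MANUAL member is confirmed by this PR.
    """

    HEURISTIC = "heuristic"  # assumed pre-existing member (illustrative)
    FACTORY = "factory"      # assumed pre-existing member (illustrative)
    MANUAL = "manual"        # new member added by this PR


# A string value from a YAML config resolves to the enum member:
source = ShardingSource("manual")
```

Because `Enum` lookup by value is built in, a `sharding_source: manual` entry in the YAML can be converted to `ShardingSource.MANUAL` without any extra parsing code.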
Estimated code review effort
🎯 2 (Simple) | ⏱️ ~12 minutes
- Verify the `MANUAL` enum member is not duplicated unintentionally in the file (summary notes two declaration regions)
- Validate YAML syntax and structure of the new manual configuration block
- Confirm layer component names and sharding plan values are consistent with codebase conventions
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Description check | ⚠️ Warning | The pull request description is largely incomplete. Critical sections including Description and Test Coverage are empty, and the PR checklist is not properly filled out. | Complete the Description section explaining what the manual TP sharding configuration does and why it's needed (referencing issues #9154 and #8948). Add a Test Coverage section listing relevant tests. Verify all checklist items are addressed. |
✅ Passed checks (2 passed)
| Check name | Status | Explanation |
|---|---|---|
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |
| Title check | ✅ Passed | The title '[#8948][feat] Support custom sharding config' directly relates to the changeset, which adds support for manual TP sharding configuration by introducing a new MANUAL enum member and configuration in default.yaml. |
/bot run
PR_Github #24590 [ run ] triggered by Bot. Commit: 71c7a07
PR_Github #24590 [ run ] completed with state SUCCESS. Commit: 71c7a07
/LLM/main/L0_MergeRequest_PR pipeline #18561 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.
/bot run --reuse-test
PR_Github #25765 [ run ] triggered by Bot. Commit: 8e313ca
PR_Github #25765 [ run ] completed with state FAILURE. Commit: 8e313ca
/LLM/main/L0_MergeRequest_PR pipeline #19540 completed with status: 'FAILURE'
/bot run
PR_Github #25862 [ run ] triggered by Bot. Commit: 014c899
PR_Github #25862 [ run ] completed with state SUCCESS. Commit: 014c899
/LLM/main/L0_MergeRequest_PR pipeline #19608 completed with status: 'FAILURE'
/bot run
/bot run
PR_Github #25876 [ run ] triggered by Bot. Commit: 94a5698
PR_Github #25877 [ run ] triggered by Bot. Commit: 94a5698
PR_Github #25876 [ run ] completed with state ABORTED. Commit: 94a5698
PR_Github #25877 [ run ] completed with state SUCCESS. Commit: 94a5698
/LLM/main/L0_MergeRequest_PR pipeline #19622 completed with status: 'FAILURE'
/bot run
PR_Github #25889 [ run ] triggered by Bot. Commit: 94a5698
PR_Github #25889 [ run ] completed with state FAILURE. Commit: 94a5698
/LLM/main/L0_MergeRequest_PR pipeline #19633 completed with status: 'FAILURE'
/bot run
PR_Github #25892 [ run ] triggered by Bot. Commit: 94a5698
PR_Github #25892 [ run ] completed with state FAILURE. Commit: 94a5698
/LLM/main/L0_MergeRequest_PR pipeline #19636 completed with status: 'FAILURE'
/bot run
PR_Github #26000 [ run ] triggered by Bot. Commit: 94a5698
PR_Github #26000 [ run ] completed with state FAILURE. Commit: 94a5698
/LLM/main/L0_MergeRequest_PR pipeline #19725 completed with status: 'FAILURE'
/bot run
PR_Github #26016 [ run ] triggered by Bot. Commit: 94a5698
PR_Github #26016 [ run ] completed with state FAILURE. Commit: 94a5698
/LLM/main/L0_MergeRequest_PR pipeline #19741 completed with status: 'FAILURE'