flashinfer
flashinfer copied to clipboard
perf: enable pdl for cutlass fp4 gemm
π Description
The enablePDL flag is set to false, this PR turned them on.
Set to true for both because sm_100 and sm_120 should have support of pdl.
π Related Issues
π Pull Request Checklist
Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.
β Pre-commit Checks
- [ ] I have installed
pre-commitby runningpip install pre-commit(or used your preferred method). - [ ] I have installed the hooks with
pre-commit install. - [ ] I have run the hooks manually with
pre-commit run --all-filesand fixed any reported issues.
If you are unsure about how to set up
pre-commit, see the pre-commit documentation.
π§ͺ Tests
- [ ] Tests have been added or updated as needed.
- [ ] All tests are passing (
unittest, etc.).
Reviewer Notes
Summary by CodeRabbit
- Refactor
- Updated runtime configuration for FP4 GEMM operations to enhance execution performance on SM100 and SM120 GPU architectures.
/bot run
Summary of Changes
Hello @yzh119, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request focuses on enhancing the performance of FP4 General Matrix Multiply (GEMM) operations by activating a previously disabled optimization flag within the CUTLASS library. The modification targets specific GPU architectures, aiming to leverage their capabilities for more efficient computation.
Highlights
- Performance Optimization: The
enablePDLflag for CUTLASS FP4 GEMM operations has been explicitly set totrueto enable performance optimizations. - Architecture Support: This change applies to both
sm_100andsm_120GPU architectures, indicating support for these specific compute capabilities.
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.
| Feature | Command | Description |
|---|---|---|
| Code Review | /gemini review |
Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary |
Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help |
Displays a list of available commands. |
Customization
To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with :thumbsup: and :thumbsdown: on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.
Walkthrough
The FP4 GEMM kernel launchers for SM100 and SM120 architectures are updated to enable PDL by changing the enablePDL flag from false to true in their respective GEMM run calls.
Changes
| Cohort / File(s) | Summary |
|---|---|
FP4 GEMM PDL Enable include/flashinfer/gemm/fp4_gemm_template_sm100.h, include/flashinfer/gemm/fp4_gemm_template_sm120.h |
Changed enablePDL parameter from false to true in the GEMM run calls for both SM100 and SM120 architectures. All error handling and initialization paths remain intact. |
Estimated code review effort
π― 1 (Trivial) | β±οΈ ~2 minutes
Poem
π° A flip of the switch, so simple and swift, PDL now enabled, a performance gift, SM100, SM120, both shine with delight, The FP4 kernels now burn oh so bright! β¨
Pre-merge checks and finishing touches
β Passed checks (3 passed)
| Check name | Status | Explanation |
|---|---|---|
| Title check | β Passed | The title 'perf: enable pdl for cutlass fp4 gemm' accurately summarizes the main change: enabling PDL for CUTLASS FP4 GEMM kernels across SM100 and SM120. |
| Description check | β Passed | The PR description includes a clear explanation of the change and the rationale, though the checklist items (pre-commit checks and tests) remain unchecked and are not explicitly addressed. |
| Docstring Coverage | β Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |
β¨ Finishing touches
- [ ] π Generate docstrings
π§ͺ Generate unit tests (beta)
- [ ] Create PR with unit tests
- [ ] Post copyable unit tests in a comment
π Recent review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
π₯ Commits
Reviewing files that changed from the base of the PR and between cce4952fdd41b353325e11d99e1fc0b0737961ff and a028a5503c5b69c2cfa119390cc5fdc57c1f504e.
π Files selected for processing (2)
include/flashinfer/gemm/fp4_gemm_template_sm100.h(1 hunks)include/flashinfer/gemm/fp4_gemm_template_sm120.h(1 hunks)
π§° Additional context used
π§ Learnings (1)
π Learning: 2025-11-12T03:35:17.583Z
Learnt from: raayandhar
Repo: flashinfer-ai/flashinfer PR: 2070
File: include/flashinfer/gemm/bf16_gemm_cutlass_template.h:145-160
Timestamp: 2025-11-12T03:35:17.583Z
Learning: In flashinfer GEMM implementations (e.g., include/flashinfer/gemm/bf16_gemm_cutlass_template.h, fp8_gemm_cutlass_template.h), it is acceptable to catch and silently ignore std::runtime_error exceptions in getWorkspaceSizeImpl when probing multiple GEMM configurations, as some configurations may legitimately fail due to SMEM constraints. This pattern should include a comment like "// Swallow errors when SMEM exceeds maximum allowed" to document the rationale.
Applied to files:
include/flashinfer/gemm/fp4_gemm_template_sm100.hinclude/flashinfer/gemm/fp4_gemm_template_sm120.h
𧬠Code graph analysis (2)
include/flashinfer/gemm/fp4_gemm_template_sm100.h (2)
include/flashinfer/gemm/fp4_gemm_template_sm120.h (1)
gemm(42-271)include/flashinfer/trtllm/gemm/trtllmGen_gemm_export/GemmInterface.h (2)
gemm(42-347)gemm(468-488)
include/flashinfer/gemm/fp4_gemm_template_sm120.h (2)
include/flashinfer/gemm/fp4_gemm_template_sm100.h (1)
gemm(38-288)include/flashinfer/gemm/fp4_gemm_cutlass_template_sm120.h (1)
gemm(44-208)
β° Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Deploy Docs
π Additional comments (2)
include/flashinfer/gemm/fp4_gemm_template_sm100.h (1)
276-276: SM100 PDL support confirmedβproceed with code change.SM100 (Blackwell) architecture supports PDL in CUTLASS with PDL/GDC support enabled by default via build macros. The code change is technically correct. Recommended: verify test coverage for this path in your test suite to ensure the enablePDL optimization is validated.
include/flashinfer/gemm/fp4_gemm_template_sm120.h (1)
260-260: PDL support on SM120/SM121 confirmed and tested.CUTLASS supports PDL for Blackwell (SM120/SM121) architecture. The repository includes test infrastructure with parametrized PDL testing (e.g.,
tests/utils/test_norm.pywithdevice_support_pdl()runtime checks andenable_pdlparameters), and the code pattern aligns with the existing SM100 implementation. No issues identified.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Comment @coderabbitai help to get the list of available commands and usage tips.
GitLab MR !139 has been created, and the CI pipeline #38541603 is currently running. I'll report back once the pipeline job completes.
[FAILED] Pipeline #38541603: 9/17 passed