
perf: enable pdl for cutlass fp4 gemm

Open · yzh119 opened this pull request 2 weeks ago · 5 comments

📌 Description

The enablePDL flag was hard-coded to false; this PR turns it on. It is set to true for both kernels because sm_100 and sm_120 should both support PDL.
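As a rough illustration of the change (this is not flashinfer's actual API: `smMajorSupportsPdl` and `makeLaunchConfig` are hypothetical names, and the SM90 cutoff is an assumption based on Programmatic Dependent Launch being a Hopper-and-newer feature):

```cpp
#include <cassert>

// Hypothetical helper: Programmatic Dependent Launch (PDL) is assumed to be
// available on Hopper (SM90) and newer, which covers sm_100 and sm_120.
bool smMajorSupportsPdl(int smMajor) { return smMajor >= 9; }

struct GemmLaunchConfig {
  bool enablePDL;
};

// Before this PR the launchers passed enablePDL = false unconditionally;
// after it, the flag is true on architectures that support PDL.
GemmLaunchConfig makeLaunchConfig(int smMajor) {
  return GemmLaunchConfig{smMajorSupportsPdl(smMajor)};
}
```

In the real headers the flag is simply flipped from `false` to `true` at the GEMM run call sites; the helper above only sketches the rationale for why both architectures get the same value.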

πŸ” Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • [ ] I have installed pre-commit by running pip install pre-commit (or via my preferred method).
  • [ ] I have installed the hooks with pre-commit install.
  • [ ] I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • [ ] Tests have been added or updated as needed.
  • [ ] All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

  • Refactor
    • Updated runtime configuration for FP4 GEMM operations to enhance execution performance on SM100 and SM120 GPU architectures.

yzh119 avatar Nov 15 '25 04:11 yzh119

/bot run

yzh119 avatar Nov 15 '25 04:11 yzh119

Summary of Changes

Hello @yzh119, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on enhancing the performance of FP4 General Matrix Multiply (GEMM) operations by activating a previously disabled optimization flag within the CUTLASS library. The modification targets specific GPU architectures, aiming to leverage their capabilities for more efficient computation.

Highlights

  • Performance Optimization: The enablePDL flag for CUTLASS FP4 GEMM operations has been explicitly set to true to enable performance optimizations.
  • Architecture Support: This change applies to both sm_100 and sm_120 GPU architectures, indicating support for these specific compute capabilities.
[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

gemini-code-assist[bot] avatar Nov 15 '25 04:11 gemini-code-assist[bot]

Walkthrough

The FP4 GEMM kernel launchers for SM100 and SM120 architectures are updated to enable PDL by changing the enablePDL flag from false to true in their respective GEMM run calls.

Changes

Cohort: FP4 GEMM PDL Enable
Files: include/flashinfer/gemm/fp4_gemm_template_sm100.h, include/flashinfer/gemm/fp4_gemm_template_sm120.h
Summary: Changed the enablePDL parameter from false to true in the GEMM run calls for both the SM100 and SM120 architectures. All error handling and initialization paths remain intact.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

Poem

🐰 A flip of the switch, so simple and swift,
PDL now enabled, a performance gift,
SM100, SM120, both shine with delight,
The FP4 kernels now burn oh so bright! ✨

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Title check: ✅ Passed. The title 'perf: enable pdl for cutlass fp4 gemm' accurately summarizes the main change: enabling PDL for CUTLASS FP4 GEMM kernels across SM100 and SM120.
  • Description check: ✅ Passed. The PR description includes a clear explanation of the change and the rationale, though the checklist items (pre-commit checks and tests) remain unchecked and are not explicitly addressed.
  • Docstring coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage; skipping the docstring coverage check.
✨ Finishing touches
  • [ ] πŸ“ Generate docstrings
πŸ§ͺ Generate unit tests (beta)
  • [ ] Create PR with unit tests
  • [ ] Post copyable unit tests in a comment

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cce4952fdd41b353325e11d99e1fc0b0737961ff and a028a5503c5b69c2cfa119390cc5fdc57c1f504e.

📒 Files selected for processing (2)
  • include/flashinfer/gemm/fp4_gemm_template_sm100.h (1 hunks)
  • include/flashinfer/gemm/fp4_gemm_template_sm120.h (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-11-12T03:35:17.583Z
Learnt from: raayandhar
Repo: flashinfer-ai/flashinfer PR: 2070
File: include/flashinfer/gemm/bf16_gemm_cutlass_template.h:145-160
Timestamp: 2025-11-12T03:35:17.583Z
Learning: In flashinfer GEMM implementations (e.g., include/flashinfer/gemm/bf16_gemm_cutlass_template.h, fp8_gemm_cutlass_template.h), it is acceptable to catch and silently ignore std::runtime_error exceptions in getWorkspaceSizeImpl when probing multiple GEMM configurations, as some configurations may legitimately fail due to SMEM constraints. This pattern should include a comment like "// Swallow errors when SMEM exceeds maximum allowed" to document the rationale.

Applied to files:

  • include/flashinfer/gemm/fp4_gemm_template_sm100.h
  • include/flashinfer/gemm/fp4_gemm_template_sm120.h
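The pattern this learning describes can be sketched as a small self-contained example (`probeWorkspaceSize` and its SMEM failure condition are illustrative stand-ins, not flashinfer's real config-probing code):

```cpp
#include <algorithm>
#include <cstddef>
#include <stdexcept>
#include <vector>

// Hypothetical probe: throws std::runtime_error when a candidate config's
// shared-memory demand exceeds the device limit (the odd/even rule below is
// purely illustrative).
size_t probeWorkspaceSize(int configId) {
  if (configId % 2 != 0) throw std::runtime_error("SMEM exceeds maximum allowed");
  return static_cast<size_t>(configId) * 1024;
}

// Mirrors the documented pattern: probe every configuration, take the largest
// workspace among the ones that succeed.
size_t getWorkspaceSizeImpl(const std::vector<int>& configs) {
  size_t maxWorkspace = 0;
  for (int c : configs) {
    try {
      maxWorkspace = std::max(maxWorkspace, probeWorkspaceSize(c));
    } catch (const std::runtime_error&) {
      // Swallow errors when SMEM exceeds maximum allowed: some candidate
      // configurations legitimately fail and are simply skipped.
    }
  }
  return maxWorkspace;
}
```

The comment inside the catch block is the part the learning asks for: it documents why silently ignoring std::runtime_error is acceptable here.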
🧬 Code graph analysis (2)
include/flashinfer/gemm/fp4_gemm_template_sm100.h (2)
include/flashinfer/gemm/fp4_gemm_template_sm120.h (1)
  • gemm (42-271)
include/flashinfer/trtllm/gemm/trtllmGen_gemm_export/GemmInterface.h (2)
  • gemm (42-347)
  • gemm (468-488)
include/flashinfer/gemm/fp4_gemm_template_sm120.h (2)
include/flashinfer/gemm/fp4_gemm_template_sm100.h (1)
  • gemm (38-288)
include/flashinfer/gemm/fp4_gemm_cutlass_template_sm120.h (1)
  • gemm (44-208)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Deploy Docs
🔇 Additional comments (2)
include/flashinfer/gemm/fp4_gemm_template_sm100.h (1)

276-276: SM100 PDL support confirmed; proceed with the code change.

SM100 (Blackwell) architecture supports PDL in CUTLASS with PDL/GDC support enabled by default via build macros. The code change is technically correct. Recommended: verify test coverage for this path in your test suite to ensure the enablePDL optimization is validated.

include/flashinfer/gemm/fp4_gemm_template_sm120.h (1)

260-260: PDL support on SM120/SM121 confirmed and tested.

CUTLASS supports PDL for Blackwell (SM120/SM121) architecture. The repository includes test infrastructure with parametrized PDL testing (e.g., tests/utils/test_norm.py with device_support_pdl() runtime checks and enable_pdl parameters), and the code pattern aligns with the existing SM100 implementation. No issues identified.



coderabbitai[bot] avatar Nov 15 '25 04:11 coderabbitai[bot]

GitLab MR !139 has been created, and the CI pipeline #38541603 is currently running. I'll report back once the pipeline job completes.

flashinfer-bot avatar Nov 15 '25 04:11 flashinfer-bot

[FAILED] Pipeline #38541603: 9/17 passed

flashinfer-bot avatar Nov 15 '25 14:11 flashinfer-bot