
[ROCm][Kernel] MoE weights padding

gshtras opened this pull request 8 months ago • 3 comments

Optimization ported over from ROCm/vllm: applying weight padding for MoE. The principle and rationale are similar to the FP8 padding in https://github.com/vllm-project/vllm/pull/13231, except here it is applied to the half-precision types. The optimization is more experimental and does not apply to every MoE model, so it is disabled by default. Unit tests have been expanded to cover the padding case.

Performance-wise, up to a 10% improvement in latency can be observed with this feature enabled on mistralai/Mixtral-8x22B-Instruct-v0.1 in the following configuration: bs=64; in=256; out=256; tp=8
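A rough way to reproduce a comparable setup with vLLM's offline API is sketched below. This is illustrative only: the VLLM_ROCM_MOE_PADDING toggle is assumed from this PR (check vllm/envs.py for the actual name and default), and the prompts only approximate the in=256 length.

import os
os.environ["VLLM_ROCM_MOE_PADDING"] = "1"  # assumed toggle for this feature

from vllm import LLM, SamplingParams

# bs=64; in=256; out=256; tp=8
llm = LLM(model="mistralai/Mixtral-8x22B-Instruct-v0.1", tensor_parallel_size=8)
params = SamplingParams(max_tokens=256, ignore_eos=True)  # out=256
prompts = ["hello " * 256] * 64                           # ~256-token prompts, bs=64
outputs = llm.generate(prompts, params)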

gshtras avatar Mar 07 '25 18:03 gshtras

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which runs a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

github-actions[bot] avatar Mar 07 '25 18:03 github-actions[bot]

QQ - would we ever not want to do this if we are on ROCm for MoE?

robertgshaw2-redhat avatar Mar 07 '25 22:03 robertgshaw2-redhat

QQ - would we ever not want to do this if we are on ROCm for MoE?

It has mostly been tested on Mixtral. Other MoE models, especially those with a custom MoE implementation, may fail due to improper padding handling.
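For context, a minimal self-contained illustration (not from this PR, no vLLM imports) of the kind of breakage a custom MoE layer can hit: once the expert weights are padded, they are no longer contiguous, so code that calls .view() on them, or otherwise assumes contiguity, fails.

import torch
import torch.nn.functional as F

w = torch.randn(4, 64, 128, dtype=torch.float16)          # [experts, N, K]
w_padded = F.pad(w, (0, 32), "constant", 0)[..., :-32]    # padded, same logical shape

w.view(4, -1)                  # fine: contiguous weights
try:
    w_padded.view(4, -1)       # fails: padded weights are non-contiguous
except RuntimeError as err:
    print("a custom layer that .view()s the weights breaks:", err)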

gshtras avatar Mar 08 '25 00:03 gshtras

does not apply to every MoE model

I think this feature should be improved so it generally satisfies the FusedMoE interface. This seems like a footgun if it will fail on common MoEs other than just Mixtral. Could you give an example of a custom MoE impl that would fail with this?

mgoin avatar Mar 10 '25 22:03 mgoin

Hi, this feature should work for any model that extends the FusedMoE class. However, if you are only importing the fused_moe kernel to plug it into a custom layer, then it requires some caution. Here's an example to illustrate: in https://github.com/vllm-project/vllm/pull/8518 we fixed this for the Dbrx model by extending the entire FusedMoE class/layer instead of just importing the fused_moe kernel and defining its own layer. That allowed padding to also work for Dbrx.

divakar-amd avatar Mar 11 '25 15:03 divakar-amd

QQ - would we ever not want to do this if we are on ROCm for MoE?

We could do the same condition check as for FP8 padding:

if (envs.VLLM_ROCM_FP8_PADDING and current_platform.is_rocm()
        and weight.stride(-1) == 1
        and (weight.stride(-2) * weight.element_size()) % 512 == 0):
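A standalone sketch of the analogous gate for half-precision MoE weights, without vLLM imports; the VLLM_ROCM_MOE_PADDING flag name, its default, and the 512-byte threshold are assumptions mirroring the FP8 check above:

import os
import torch

def should_pad_moe_weight(weight: torch.Tensor, align_bytes: int = 512) -> bool:
    on_rocm = torch.version.hip is not None                  # ROCm build of PyTorch
    enabled = os.environ.get("VLLM_ROCM_MOE_PADDING", "1") == "1"  # flag name assumed
    return (enabled and on_rocm
            and weight.stride(-1) == 1                        # innermost dim is dense
            and (weight.stride(-2) * weight.element_size()) % align_bytes == 0)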

charlifu avatar Mar 11 '25 15:03 charlifu

Hi, this feature should work for any model that extends the FusedMoE class. However, if you are only importing the fused_moe kernel to plug it into a custom layer, then it requires some caution. Here's an example to illustrate: in #8518 we fixed this for the Dbrx model by extending the entire FusedMoE class/layer instead of just importing the fused_moe kernel and defining its own layer. That allowed padding to also work for Dbrx.

There is a way to avoid this. We can also pad the weight tensor and do a slice operation on the weight, as we did in the FP8 padding PR #13231:

weight = F.pad(weight, (0, num_pad), "constant", 0)[..., :-num_pad]

If we do so, there is no need to have padding_size in fused_moe.py, but we have to remove the requirement that the weight be contiguous.
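To make the consequence concrete, here is a small sketch (with made-up shapes) showing that the pad-then-slice trick keeps the logical shape but changes the row stride, which is exactly why the contiguity requirement has to go:

import torch
import torch.nn.functional as F

num_pad = 32
w = torch.randn(4, 64, 128, dtype=torch.float16)                  # [experts, N, K]
w_padded = F.pad(w, (0, num_pad), "constant", 0)[..., :-num_pad]

assert w_padded.shape == w.shape                     # logical shape is unchanged
assert w_padded.stride(-2) == w.shape[-1] + num_pad  # rows are padded in memory
assert not w_padded.is_contiguous()                  # so the contiguity check must be dropped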

charlifu avatar Mar 11 '25 15:03 charlifu

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @gshtras.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

mergify[bot] avatar Mar 20 '25 21:03 mergify[bot]