[Misc] Update Fused MoE weight loading
Summary:
- Splits up #6422 into two separate PRs. This is the first of the two. The second will leverage the weight loading changes introduced in this PR while adding the AWQ Fused MoE Kernel
- Refactors FusedMoE.weight_loader to enable loading AWQ models, whose weights are stored transposed on disk as (input_dim, output_dim); fp16 and fp8 models store (output_dim, input_dim). This required more complex indexing logic for the TP and MergedColumn cases (see the weight-loader sketch after this list)
- Refactors expert_params_mapping, which was overfit to fp16 and fp8 checkpoints. This required renaming the fp8 scale parameters to better match the state dicts we create in AutoFP8, limiting the amount of remapping needed in the model files (see the mapping sketch below)
- Updates layers to call fused_topk/grouped_topk and fused_experts, rather than fused_moe directly, so that the routing logic can be reused across fp16, fp8, and AWQ (see the simplified reference below)
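A minimal sketch of the weight-loader idea (not the actual vLLM code; `load_moe_weight`, `is_transposed`, and the argument names are illustrative): for column-parallel w1/w3 shards the output dim is split and the shard is placed into the merged parameter, for the row-parallel w2 shard the input dim is split, and when the checkpoint is transposed (AWQ) those dims swap places on disk.

```python
# Illustrative sketch only; names and signatures are hypothetical.
import torch


def load_moe_weight(param: torch.Tensor,
                    loaded_weight: torch.Tensor,
                    shard_id: str,        # "w1"/"w3" (merged column) or "w2" (row)
                    expert_id: int,
                    tp_rank: int,
                    is_transposed: bool) -> None:
    expert_param = param.data[expert_id]
    if shard_id in ("w1", "w3"):
        # Column-parallel: split the output dim; on a transposed (AWQ)
        # checkpoint the output dim is dim 1 instead of dim 0.
        shard_dim = 1 if is_transposed else 0
        shard_size = expert_param.shape[shard_dim] // 2  # w1 and w3 are merged
        offset = 0 if shard_id == "w1" else shard_size
        dst = expert_param.narrow(shard_dim, offset, shard_size)
    else:  # "w2"
        # Row-parallel: split the input dim, which likewise flips when transposed.
        shard_dim = 0 if is_transposed else 1
        shard_size = expert_param.shape[shard_dim]
        dst = expert_param
    # Take this TP rank's slice of the full checkpoint tensor and copy it in.
    src = loaded_weight.narrow(shard_dim, shard_size * tp_rank, shard_size)
    dst.copy_(src)
```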
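A hedged sketch of what a generic expert_params_mapping can look like: one (param_name_prefix, checkpoint_key_prefix, expert_id, shard_id) tuple per expert weight, so a model file only loops over the mapping and dispatches to the weight loader. The helper name and checkpoint key layout below are illustrative, not the literal vLLM API.

```python
from typing import List, Tuple


def make_expert_params_mapping(
        ckpt_gate_proj_name: str,   # e.g. "gate_proj" or "w1"
        ckpt_down_proj_name: str,   # e.g. "down_proj" or "w2"
        ckpt_up_proj_name: str,     # e.g. "up_proj" or "w3"
        num_experts: int) -> List[Tuple[str, str, int, str]]:
    mapping = []
    for expert_id in range(num_experts):
        for shard_id, ckpt_name in (("w1", ckpt_gate_proj_name),
                                    ("w2", ckpt_down_proj_name),
                                    ("w3", ckpt_up_proj_name)):
            # w1/w3 land in the merged "w13_" parameter, w2 in "w2_".
            prefix = "experts.w13_" if shard_id in ("w1", "w3") else "experts.w2_"
            mapping.append((
                prefix,                               # model param name prefix
                f"experts.{expert_id}.{ckpt_name}.",  # checkpoint key prefix
                expert_id,
                shard_id,
            ))
    return mapping
```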
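And an illustrative pure-PyTorch reference for why splitting routing from expert execution helps: the top-k routing step is identical for fp16, fp8, and AWQ layers, and only the "apply experts" step changes with the quantization method. The functions below loosely mirror fused_topk/fused_experts but are simplified (no merged gate/up projection, no fused kernel); exact vLLM signatures may differ.

```python
import torch


def naive_topk(hidden_states: torch.Tensor, gating_output: torch.Tensor,
               topk: int, renormalize: bool):
    # Shared routing logic: pick top-k experts per token.
    scores = torch.softmax(gating_output, dim=-1)
    topk_weights, topk_ids = torch.topk(scores, topk, dim=-1)
    if renormalize:
        topk_weights = topk_weights / topk_weights.sum(dim=-1, keepdim=True)
    return topk_weights, topk_ids


def naive_experts(hidden_states: torch.Tensor, w1: torch.Tensor,
                  w2: torch.Tensor, topk_weights: torch.Tensor,
                  topk_ids: torch.Tensor) -> torch.Tensor:
    # Quantization-specific expert execution; a real kernel fuses these loops
    # and would consume dequantized or packed weights as appropriate.
    out = torch.zeros_like(hidden_states)
    for token in range(hidden_states.shape[0]):
        for k in range(topk_ids.shape[1]):
            e = topk_ids[token, k]
            h = torch.nn.functional.silu(hidden_states[token] @ w1[e].t())
            out[token] += topk_weights[token, k] * (h @ w2[e].t())
    return out
```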
From Neural Magic. Co-authored by @robertgshaw2-neuralmagic
👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, which consists of a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of the default ones by unblocking the steps in your fast-check build on the Buildkite UI.
Once the PR is approved and ready to go, please make sure to run full CI as it is required to merge (or just use auto-merge).
To run full CI, you can do one of these:
- Comment /ready on the PR
- Add the ready label to the PR
- Enable auto-merge.
🚀
/ready
This LGTM but have you verified that DeepSeek MoE is okay with this PR?
Yes: DeepSeek, Mixtral, and Qwen.