[Kernel] Enable 8-bit weights in Fused Marlin MoE
This PR covers the remaining work from PR #7079. It enables 8-bit weights in the fused Marlin kernel without modifying the code paths that run the kernel, and it also adds 8-bit tests to test_moe.py.
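For context, here is a self-contained toy analogue of the kind of numerical check such 8-bit tests perform: quantize the expert weights to `num_bits`, run a simple per-expert matmul, and compare against the full-precision reference. This is illustrative only; the real test_moe.py exercises the actual Marlin kernels rather than the placeholder quantizer below.

```python
import pytest
import torch


def quantize_weights(w: torch.Tensor, num_bits: int):
    """Toy symmetric per-row quantization to num_bits (placeholder, not vLLM's)."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = w.abs().amax(dim=1, keepdim=True) / qmax
    q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
    return q, scale


@pytest.mark.parametrize("num_bits", [4, 8])  # the 8-bit case is what this PR adds
def test_quantized_moe_matches_reference(num_bits):
    torch.manual_seed(0)
    num_experts, hidden, inter = 4, 64, 128
    w = torch.randn(num_experts, inter, hidden)
    x = torch.randn(8, hidden)

    # Full-precision per-expert reference: out[e, t, i] = sum_h w[e, i, h] * x[t, h]
    ref = torch.einsum("eih,th->eti", w, x)

    q, scale = quantize_weights(w.reshape(-1, hidden), num_bits)
    w_dq = (q * scale).reshape(num_experts, inter, hidden)
    out = torch.einsum("eih,th->eti", w_dq, x)

    # 8-bit weights should track the reference much more tightly than 4-bit.
    rel_err = (out - ref).norm() / ref.norm()
    assert rel_err < (0.2 if num_bits == 4 else 0.02)
```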
Tested e2e by running offline_inference.py with

```python
llm = LLM(model="nm-testing/Mixtral-8x7B-Instruct-v0.1-W8A16-quantized")
```

and

```python
llm = LLM(model="TheBloke/Mixtral-8x7B-v0.1-GPTQ",
          revision="gptq-8bit-128g-actorder_True")
```
👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which consists of a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of the default ones by unblocking the steps in your fast-check build on the Buildkite UI.
Once the PR is approved and ready to go, please make sure to run full CI as it is required to merge (or just use auto-merge).
To run full CI, you can do one of these:
- Comment `/ready` on the PR
- Add the `ready` label to the PR
- Enable auto-merge.
🚀
/ready
Just left one quick comment. I'm going to pull this PR in and try it with a compressed-tensors W8A16 model.
Confirmed this works with compressed-tensors W8A16.
Still need to test with a DeepSeek-V2 model.
@ElizaWszola It looks like the kernel test failures start after tests/kernels/test_moe.py. Could you take a look?