
[Kernel] Enable 8-bit weights in Fused Marlin MoE

ElizaWszola opened this pull request 1 year ago • 5 comments

This PR covers the remaining work of PR #7079. It enables 8-bit weights in the fused Marlin kernel without modifying the code paths that run the kernel, and adds 8-bit tests to test_moe.py.
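For readers unfamiliar with what a fused MoE layer computes, here is a minimal pure-Python sketch of Mixtral-style top-k expert routing, the operation the fused Marlin kernel accelerates. This is illustrative only; function names and structure are my own and do not reflect vLLM's actual kernel code.

```python
import math

def topk_softmax(logits, k=2):
    """Pick the top-k experts for one token and softmax-normalize
    their gate weights over just those k experts (Mixtral-style routing)."""
    ranked = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    exps = [math.exp(logits[i]) for i in ranked]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(ranked, exps)]

def moe_forward(x, gate_logits, experts, k=2):
    """Weighted sum of the top-k experts' outputs for one token.
    A fused kernel performs this dispatch, expert matmul, and
    reduction in one pass instead of looping in Python."""
    out = [0.0] * len(x)
    for idx, weight in topk_softmax(gate_logits, k):
        y = experts[idx](x)
        out = [o + weight * yi for o, yi in zip(out, y)]
    return out
```

With 8-bit weights, each expert's matrices are stored quantized and dequantized inside the kernel, which is why no surrounding code path needs to change.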

Tested end-to-end by running offline_inference.py with

from vllm import LLM

llm = LLM(model="nm-testing/Mixtral-8x7B-Instruct-v0.1-W8A16-quantized")

and

llm = LLM(model="TheBloke/Mixtral-8x7B-v0.1-GPTQ",
          revision="gptq-8bit-128g-actorder_True")
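For context on the W8A16 and GPTQ 8-bit/128g formats tested above, here is a minimal pure-Python sketch of symmetric per-group int8 weight quantization, where "128g" refers to the group size. The function names are illustrative and are not vLLM's or GPTQ's actual implementation.

```python
def quantize_w8(weights, group_size=128):
    """Symmetric per-group int8 quantization.
    Each group of `group_size` values shares one fp scale;
    returns (int8 values, per-group scales)."""
    qweights, scales = [], []
    for start in range(0, len(weights), group_size):
        group = weights[start:start + group_size]
        # Scale so the largest magnitude in the group maps to 127.
        scale = max(abs(w) for w in group) / 127.0 or 1.0
        scales.append(scale)
        qweights.extend(max(-127, min(127, round(w / scale))) for w in group)
    return qweights, scales

def dequantize_w8(qweights, scales, group_size=128):
    """Recover approximate fp weights from int8 values and group scales."""
    return [q * scales[i // group_size] for i, q in enumerate(qweights)]
```

In W8A16, only the weights are stored in 8 bits; activations stay in 16-bit floats, and the kernel dequantizes weights on the fly during the matmul.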

ElizaWszola avatar Aug 30 '24 13:08 ElizaWszola

👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which consists of a small, essential subset of CI tests to quickly catch errors. You can run additional CI tests on top of the default ones by unblocking the steps in your fastcheck build in the Buildkite UI.

Once the PR is approved and ready to go, please make sure to run full CI, as it is required for merging (or just use auto-merge).

To run full CI, you can do one of these:

  • Comment /ready on the PR
  • Add ready label to the PR
  • Enable auto-merge.

🚀

github-actions[bot] avatar Aug 30 '24 13:08 github-actions[bot]

/ready

dsikka avatar Aug 30 '24 16:08 dsikka

Just left one quick comment. I'm going to pull this PR in and try it with a compressed-tensors W8A16 model.

Confirmed this works with compressed-tensors W8A16.

dsikka avatar Aug 30 '24 17:08 dsikka

Still need to test with a DeepSeek-V2 model.

dsikka avatar Aug 30 '24 20:08 dsikka

@ElizaWszola seems like the kernel test failures start after tests/kernels/test_moe.py - could you take a look?

dsikka avatar Sep 04 '24 21:09 dsikka