[WIP] AWQ Faster Kernels
New AWQ kernels have been introduced by the AWQ authors:
- a new weight packing format
- semaphores used during execution
- a mix of GEMV and GEMM kernels for optimal speed (see the sketch below the list)
- much better scaling of decoding speed
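To make the GEMV/GEMM mix concrete, here is a hypothetical sketch of the kind of token-count dispatch involved; the threshold and kernel names are assumptions for illustration, not the actual AutoAWQ or vLLM logic:

```python
import torch

def pick_awq_kernel(x: torch.Tensor, gemv_token_limit: int = 8) -> str:
    """Hypothetical dispatch sketch: small token counts favour a GEMV-style
    kernel, larger ones a GEMM-style kernel. Not the real AutoAWQ heuristic."""
    # Collapse any leading batch/sequence dimensions to count tokens.
    num_tokens = x.reshape(-1, x.shape[-1]).shape[0]
    return "gemv_fast" if num_tokens <= gemv_token_limit else "gemm"

# Decoding a single sequence (1 token) vs. prefill of a 512-token prompt:
print(pick_awq_kernel(torch.randn(1, 1, 4096)))    # gemv_fast
print(pick_awq_kernel(torch.randn(1, 512, 4096)))  # gemm
```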
Testing Model: casperhansen/mistral-instruct-v0.2-gemvfast-awq
This PR is currently implemented as a draft:
- [x] Include new kernels and build them
- [ ] Implement new weight loading for packed + interleaved weights. EDIT: Currently facing this issue (a minimal reproduction is sketched below the task list):
File "/workspace/vllm/vllm/model_executor/layers/linear.py", line 324, in weight_loader
param_data = param_data.narrow(output_dim, shard_offset,
RuntimeError: start (0) + length (14336) exceeds dimension size (7168).
- [ ] Implement forward pass
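For anyone hitting the same error, here is a minimal reproduction under assumed shapes; the tensor sizes come from the traceback above, everything else is illustrative:

```python
import torch

# The shard size is computed from the unpacked layer dimensions, while the
# checkpoint tensor is stored packed/interleaved, so its output dimension is
# smaller than the requested shard and narrow() overruns it.
output_dim = 0
shard_offset = 0
shard_size = 14336                    # unpacked shard width, per the traceback
param_data = torch.empty(7168, 4096)  # packed dimension is only 7168, per the traceback

param_data.narrow(output_dim, shard_offset, shard_size)
# RuntimeError: start (0) + length (14336) exceeds dimension size (7168).
```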
@WoosukKwon I have used the same shapes as referenced in the original implementation, yet it does not load in vLLM, and I am unsure how to fix it. If I add interleaving to the packed shards, nothing happens, as the interleaving and pack factor cancel each other out (a toy illustration follows below). See WQLinear_GEMVFast in AutoAWQ for reference.
How should we proceed to implement weight loading for this new format?
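To make the "cancel each other out" point concrete, here is a toy calculation; the factor values are assumptions for illustration, not the real GEMVFast constants:

```python
# If the scaling applied for interleaving equals the division by the pack
# factor, the adjusted shard size is identical to the unadjusted one, so the
# narrow() call fails exactly as before.
interleave = 4
pack_factor = 4
shard_size = 14336

adjusted = shard_size * interleave // pack_factor
print(adjusted)  # 14336 -- unchanged, still larger than the stored dimension of 7168
```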
Hello, is there any progress?
@shiqingzhangCSU Currently there is no progress. If you have suggestions or fixes, please open a PR against my fork. I am hoping to have this feature in vLLM soon, but the weight loading is a blocker.
@casper-hansen Hi, I'm hitting the same issue. To unblock myself, would you mind sharing which previous version of AutoAWQ works with vLLM?
I have identified the source of the issue.
There is faulty logic in `MergedColumnParallelLinear` and `QKVParallelLinear` for the case where `output_dim=1` AND `packed_dim=1`; `awq_gemv_fast` is the first quantization kernel that hits this case.
I am working on a fix that avoids breaking GPTQ (a sketch of the idea follows).
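For context, here is a hedged sketch (not the actual vLLM patch) of the shard bookkeeping that seems to need changing when the packed dimension coincides with the output dimension; the `interleave` handling and the factor values in the example are assumptions:

```python
from typing import Optional, Tuple

def adjust_shard(shard_offset: int, shard_size: int, output_dim: int,
                 packed_dim: Optional[int], pack_factor: int,
                 interleave: int = 1) -> Tuple[int, int]:
    """Map shard bounds from the unpacked layout into the packed (and, for the
    GEMVFast format, interleaved) layout before calling narrow()."""
    if packed_dim is not None and packed_dim == output_dim:
        shard_offset = shard_offset * interleave // pack_factor
        shard_size = shard_size * interleave // pack_factor
    return shard_offset, shard_size

# With the shapes from the traceback (factor values are assumed, not verified):
print(adjust_shard(0, 14336, output_dim=1, packed_dim=1,
                   pack_factor=8, interleave=4))
# (0, 7168) -- now fits inside the stored parameter's dimension
```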
@robertgshaw2-neuralmagic any luck with this patch? I benchmarked and those kernels are really something. Great boost on my internal tests!
@bratao I believe Rob has a branch over in the neuralmagic fork. We discussed how to solve the issues, and it seems there is a path forward for loading the weights correctly. The forward pass also needs a modification from its current state in the referenced branch, similar to the PR I recently created in AutoAWQ.
https://github.com/neuralmagic/nm-vllm/tree/awq_faster_kernel
I merged @chu-tianxiang's PR and made some more modifications to catch up to the main branch. I will abandon this PR for now and leave it as a draft for someone else to finish.
Here is the list of issues I was facing:
- The batch dimension is missing from the `input` variable being passed around. This is suboptimal for the heuristics that AWQ relies on for choosing kernels, and it is no longer entirely clear what `input` contains (see the sketch after this list).
- The forward pass runs but generates no output. I am not sure what is causing this; probably the obscure weight loading.
- The speed is much slower than benchmarked, indicating an issue either with vLLM itself or with the heuristics not being triggered correctly. For reference, these kernels are not just a little faster, but a lot faster than the previous generation of kernels.
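As a hypothetical illustration of the first point: the layer receives a flattened `(num_tokens, hidden_size)` tensor, while the AWQ heuristics were written against a `(batch, seq_len, hidden_size)` tensor, so a batch-size-based kernel choice can no longer be made reliably. The shapes below are arbitrary examples:

```python
import torch

hidden_size = 4096

# Shape the AWQ heuristics expect: batch and sequence length are separate.
x_awq = torch.randn(2, 64, hidden_size)
# Shape being passed around here: batch and sequence length are merged.
x_flat = x_awq.reshape(-1, hidden_size)

print(x_awq.shape)   # torch.Size([2, 64, 4096]) -> batch size 2 is visible
print(x_flat.shape)  # torch.Size([128, 4096])   -> batch size is no longer recoverable
```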
This should be safe to close, since the optimized Marlin kernel has supported AWQ models for several months now: https://github.com/vllm-project/vllm/pull/6612