
[WIP] AWQ Faster Kernels

Open • casper-hansen opened this issue • 4 comments

New AWQ kernels have been introduced by the AWQ authors:

  • new weight packing format
  • uses semaphores during execution
  • uses a mix of GEMV and GEMM for optimal speed (a dispatch sketch follows this list)
  • decoding speed scales much better
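
For context on the GEMV/GEMM mix, here is a minimal sketch of the kind of batch-size dispatch these kernels imply. The kernel callables and the 8-token threshold are placeholders, not the actual AutoAWQ API:

```python
import torch

def awq_forward(x: torch.Tensor, qweight, scales, qzeros,
                gemv_fast_kernel, gemm_kernel, gemv_token_threshold: int = 8):
    """Dispatch between a GEMV-style and a GEMM-style quantized matmul.

    The kernel callables are passed in rather than named, and the 8-token
    threshold is an assumed placeholder; the real heuristic lives in the
    AWQ/AutoAWQ kernel code.
    """
    # vLLM passes activations flattened, so the token count stands in for the batch size.
    num_tokens = x.numel() // x.shape[-1]
    if num_tokens < gemv_token_threshold:
        # Few tokens (decoding): memory-bound, the GEMV path wins.
        return gemv_fast_kernel(x, qweight, scales, qzeros)
    # Many tokens (prefill): compute-bound, the GEMM path wins.
    return gemm_kernel(x, qweight, scales, qzeros)
```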

Testing Model: casperhansen/mistral-instruct-v0.2-gemvfast-awq

This PR is currently implemented as a draft:

  • [x] Include new kernels and build them
  • [ ] Implement new weight loading for packed + interleaved weights. EDIT: Currently facing this issue (a minimal reproduction follows this checklist):
  File "/workspace/vllm/vllm/model_executor/layers/linear.py", line 324, in weight_loader
    param_data = param_data.narrow(output_dim, shard_offset,
RuntimeError: start (0) + length (14336) exceeds dimension size (7168).
  • [ ] Implement forward pass
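
To make the failure concrete, here is a minimal reproduction of the shape mismatch using the numbers from the traceback. The qweight shape is a stand-in; only the 7168 vs. 14336 mismatch is taken from the error:

```python
import torch

# Shapes taken from the traceback above: the checkpoint's packed output dimension
# is 7168 wide, while the loader still narrows with the unpacked shard size 14336.
# The 4096 input dimension is a stand-in for Mistral's hidden size.
param_data = torch.zeros(4096, 7168, dtype=torch.int32)  # packed + interleaved qweight
output_dim, shard_offset, shard_size = 1, 0, 14336        # shard size still in unpacked units

param_data.narrow(output_dim, shard_offset, shard_size)
# RuntimeError: start (0) + length (14336) exceeds dimension size (7168).
```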

casper-hansen • Mar 08 '24 23:03

@WoosukKwon I have used the same shapes as referenced in the original implementation, yet it does not load in vLLM for reasons I am unsure how to fix. If I add interleaving to the packed shards, nothing happens as the interleaving and packed factor cancel each other out. See the WQLinear_GEMVFast in AutoAWQ for reference.

How should we proceed to implement weight loading for this new format?
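
One way to see the cancellation described above (the factor values are assumptions for illustration; the real ones come from WQLinear_GEMVFast in AutoAWQ):

```python
pack_factor = 8   # int4 values per int32 word
interleave = 8    # assumed equal to the pack factor here, which is what makes the terms cancel

shard_size = 14336
# Naive adjustment: shrink by the pack factor, grow back by the interleave factor.
adjusted = (shard_size // pack_factor) * interleave
assert adjusted == shard_size  # a no-op, i.e. "nothing happens"
```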

casper-hansen • Mar 10 '24 18:03

Hello, is there any progress?

shiqingzhangCSU • Mar 14 '24 02:03

@shiqingzhangCSU Currently there is no progress. If you have suggestions or fixes, please open a PR against my fork. I am hoping to have this feature in vLLM soon, but the weight loading is a blocker.

casper-hansen • Mar 14 '24 13:03

@casper-hansen Hi, I'm running into the same issue. To unblock myself, would you mind sharing which earlier version of AutoAWQ works with vLLM?

itsuncheng • Mar 15 '24 07:03

I have identified the source of the issue.

There is faulty logic in MergedColumnParallelLinear and QKVParallelLinear for the case where output_dim=1 AND packed_dim=1. awq_gemv_fast is the first quantization kernel with this case.

I am working on a fix that avoids breaking GPTQ.
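
For anyone following along, a sketch of the kind of pack-factor adjustment involved. This is a hypothetical helper for illustration, not the actual patch; the attribute names and GEMVFast specifics are assumptions:

```python
def adjust_shard_for_packing(shard_offset: int, shard_size: int,
                             output_dim: int, packed_dim, pack_factor: int):
    """Scale shard bounds down when the dimension being narrowed is also the packed one.

    The real fix has to thread this through MergedColumnParallelLinear and
    QKVParallelLinear without changing the existing GPTQ/AWQ paths.
    """
    if packed_dim is not None and packed_dim == output_dim:
        return shard_offset // pack_factor, shard_size // pack_factor
    return shard_offset, shard_size


# With the traceback's numbers: a 14336-wide shard becomes 1792 packed columns
# (assuming pack_factor=8), which fits inside the 7168-wide packed dimension.
print(adjust_shard_for_packing(0, 14336, output_dim=1, packed_dim=1, pack_factor=8))
```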

robertgshaw2-redhat • Mar 30 '24 17:03

@robertgshaw2-neuralmagic Any luck with this patch? I benchmarked the kernels and they are really something: a great boost in my internal tests!

bratao • Apr 06 '24 02:04

@bratao I believe Rob has a branch over in the neuralmagic fork. We discussed how to solve the issues, and it seems there is a path forward for loading the weights correctly. The forward pass also needs a modification from its current state in the referenced branch, similar to the PR I recently created in AutoAWQ.

https://github.com/neuralmagic/nm-vllm/tree/awq_faster_kernel

casper-hansen • Apr 06 '24 09:04

I merged @chu-tianxiang's PR and made some more modifications to catch up to the main branch. I will abandon this PR for now and leave it as a draft for someone else to finish.

Here is the list of issues I was facing:

  1. The batch dimension is missing from the input variable being passed around. This is suboptimal for the heuristics AWQ relies on to choose kernels, and it is no longer entirely clear what the input contains (see the reshape sketch after this list).
  2. The forward pass runs but generates no output. I am not sure what is causing this; the opaque weight loading is the most likely culprit.
  3. The speed is much slower than benchmarked, indicating an issue either in vLLM itself or in the heuristics not being triggered correctly. For reference, these kernels are not just a little faster but a lot faster than the previous generation.
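
On point 1, a hedged sketch of what reintroducing a batch dimension could look like. The function name is hypothetical, and it only restates the limitation rather than solving it:

```python
import torch

def reshape_for_awq_heuristics(x: torch.Tensor) -> torch.Tensor:
    """Reintroduce a batch dimension before calling the AWQ kernels.

    vLLM hands the linear layer a flattened [num_tokens, hidden_size] tensor,
    while the AWQ heuristics were written against [batch, seq_len, hidden_size].
    This sketch cannot recover the real batch/sequence split, which is exactly
    the limitation described in point 1 above.
    """
    if x.dim() == 2:
        return x.unsqueeze(0)  # [1, num_tokens, hidden_size]
    return x
```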

casper-hansen • Apr 20 '24 19:04

This should be safe to close, since the optimized Marlin kernel has supported AWQ models for several months now: https://github.com/vllm-project/vllm/pull/6612

mgoin • Feb 17 '25 18:02