Unsloth with vLLM in 8/4 bits
I have trained a QLoRA model with Unsloth and I want to serve it with vLLM, but I haven't found a way to serve the model in 8/4 bits?
@quancore I'm not sure if vLLM allows serving in 4 or 8 bits! 16-bit yes, but unsure on 4 or 8
@danielhanchen I think it is: https://github.com/vllm-project/vllm/issues/1155
Looks like they only support AWQ quantization, not bitsandbytes.
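For reference, serving an AWQ checkpoint with vLLM looks roughly like this - a minimal sketch, where the model path is a placeholder for any AWQ-quantized checkpoint:

```python
# Minimal sketch: serving an AWQ-quantized model with vLLM's Python API.
# "your-org/your-model-awq" is a placeholder, not a real checkpoint.
from vllm import LLM, SamplingParams

llm = LLM(model="your-org/your-model-awq", quantization="awq")
params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["What is QLoRA?"], params)
print(outputs[0].outputs[0].text)
```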
@patleeman Oh ye AWQ is great - I'm assuming you want to quantize it to AWQ?
@patleeman @danielhanchen well yes, maybe we should support AWQ so we can use QLoRA models with vLLM?
Hello there. I am also interested in using an 8/4-bit model trained with Unsloth with vLLM. Currently, it works fine in 16 bits but requires too much VRAM. Is there a way to quantize a model trained with Unsloth using AWQ or GPTQ?
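For reference, a rough sketch of one way to do it today with the standalone AutoAWQ library - not an Unsloth API. It assumes the LoRA adapter has already been merged into a 16-bit checkpoint (e.g. via Unsloth's `save_pretrained_merged`), and the paths are placeholders:

```python
# Rough sketch: quantizing a merged 16-bit checkpoint to 4-bit AWQ with
# AutoAWQ (pip install autoawq). Paths are placeholders.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "./merged-16bit"  # LoRA adapter already merged into the base model
quant_path = "./model-awq"     # where to write the quantized checkpoint
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

model.quantize(tokenizer, quant_config=quant_config)  # runs AWQ calibration
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```

The resulting folder can then be loaded by vLLM with `quantization="awq"` as shown above.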
Whoops this missed me - yep having an option to convert it to AWQ is interesting
That would be amazing - is this a feature you are planning on adding in the near future?
Yep for a future release!
I'm down to volunteer to work on this, if you're accepting community contributions. (I have to do this for my day job anyway, so it might be nice to contribute to the library.)
@amir-in-a-cynch do you plan to do it?
I'll take a stab at it tomorrow and Wednesday. Not sure if it'll end up being a clean integration into this library's API (since it adds a dependency), but in the worst case we should be able to put together an example notebook for the docs on how to do it.
@amir-in-a-cynch great, keep me posted - I don't mind giving you a helping hand if you get stuck at some point
I think exporting to 8 bits for vLLM is through AWQ - you can also enable float8 (FP8) support (if your GPU supports it)
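Something like this should do it for FP8 - a minimal sketch, assuming a recent vLLM build and an FP8-capable GPU (e.g. H100); the model name is a placeholder:

```python
# Minimal sketch: on-the-fly FP8 weight quantization in vLLM.
# "your-org/your-merged-model" is a placeholder 16-bit checkpoint.
from vllm import LLM

llm = LLM(model="your-org/your-merged-model", quantization="fp8")
```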