
Add support for the vLLM inference engine - to possibly gain a 10x speedup in inference

Open ofirkris opened this issue 2 years ago • 19 comments

vLLM is an open-source LLM inference and serving library that accelerates HuggingFace Transformers by 24x and powers Vicuna and Chatbot Arena.

Blog post: https://vllm.ai/
Repo: https://github.com/vllm-project/vllm

ofirkris avatar Jun 20 '23 20:06 ofirkris

If the performance claims aren't overcooked or super situational, this could be huge

Slug-Cat avatar Jun 20 '23 22:06 Slug-Cat

AI is where you have some of the brightest minds in the world working on some of the most complicated maths and somehow someone just comes and does something like this (assuming it's real).

Are we in an "AI summer"? 😂

CamiloMM avatar Jun 21 '23 00:06 CamiloMM

It's ExLlama for everything else... and a new loader can just be added.

Ph0rk0z avatar Jun 21 '23 11:06 Ph0rk0z

vLLM's 24x speedup only applies to running full-fat models with massive parallelization, so if you need to run 100 inferences at the same time, it's fast. But for most people, ExLlama is still faster/better. @turboderp has some good insights on the LocalLLaMA subreddit.
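
For context, the batched use case those speedup claims are about looks roughly like this with vLLM's offline API (the model name and prompts below are placeholders, and the exact API may have changed since):

```python
# Rough sketch of vLLM's offline batched generation -- illustrative only,
# check the vLLM docs for the current API.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # placeholder model
sampling = SamplingParams(temperature=0.8, max_tokens=64)

# The 100-concurrent-inferences case: hand vLLM many prompts at once and
# it schedules them together (continuous batching + PagedAttention).
prompts = [f"Question {i}: what is vLLM good for?" for i in range(100)]
outputs = llm.generate(prompts, sampling)

for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```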

Unless someone is feeling ambitious, I think this could be closed. The issue poster probably didn't understand what vLLM is really for.

tensiondriven avatar Jun 22 '23 23:06 tensiondriven

Does tensor parallelism help with multi-GPU? And with the multi-user support, this might actually serve the intended purpose.

Ph0rk0z avatar Jun 24 '23 14:06 Ph0rk0z

Does anyone know anything about this? [image]

cibernicola avatar Jul 08 '23 11:07 cibernicola

I'm not sure how they arrive at those results. Plain HF Transformers can be mighty slow, but you have to really try to make it that slow, I feel. As for vLLM, it's not for quantized models, and as such it's quite a bit slower than ExLlama (or llama.cpp with GPU acceleration, for that matter). If you're deploying a full-precision model to serve inference to multiple clients, it might be very useful, though.

turboderp avatar Jul 08 '23 23:07 turboderp

@oobabooga

https://github.com/oobabooga/text-generation-webui/pull/4794#issuecomment-1837714017

Since we're no longer considering new model loaders aimed only at single-user mode, we should consider vLLM now: it frequently adds support for newly released models like Qwen, and offers both multi-client serving and quantization (AWQ). https://github.com/vllm-project/vllm
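
A rough sketch of what loading an AWQ checkpoint through vLLM looks like (the model path is just an example, and quantization support may have changed since):

```python
# Illustrative only: loading an AWQ-quantized model in vLLM.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen1.5-7B-Chat-AWQ",  # example AWQ checkpoint
    quantization="awq",
)
outputs = llm.generate(["Hello, who are you?"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```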

yhyu13 avatar Dec 04 '23 15:12 yhyu13

@oobabooga is this on the roadmap?

rafa-9 avatar Jan 08 '24 04:01 rafa-9

Seems it's not coming, for now at least: https://github.com/oobabooga/text-generation-webui/pull/4860

nonetrix avatar Jan 25 '24 09:01 nonetrix

This should be reconsidered. The concern about plaguing the codebase with CUDA dependencies is valid, but we should address the design constraints to make this happen rather than close the door entirely on something that could benefit ooba's tool. I guess you could serve an OpenAI-format API externally from a vLLM model and point ooba's side at it. It could be merely a separate script with different requirements to hack this up?
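
As a sketch of that workaround, assuming a vLLM OpenAI-compatible server is already running separately (the port, model name and prompt below are placeholders):

```python
# Sketch: talk to an externally running vLLM OpenAI-compatible server,
# e.g. one started with `python -m vllm.entrypoints.openai.api_server --model <model>`.
# The port and model name here are illustrative.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's default server port
    api_key="not-needed",                 # vLLM doesn't require a key by default
)

resp = client.chat.completions.create(
    model="mistralai/Mistral-7B-Instruct-v0.2",  # whatever the server was launched with
    messages=[{"role": "user", "content": "Hello from the webui side"}],
)
print(resp.choices[0].message.content)
```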

@oobabooga what would the acceptance criteria be? I find it very handy to serve/eval/play at the same time in a friendly ecosystem like ooba's.

fblgit avatar Feb 07 '24 10:02 fblgit

vLLM has gradually introduced support for GPTQ and AWQ models, with imminent plans to accommodate the as-yet-unmerged QLoRA and QA-LoRA models. Moreover, the acceleration delivered by vLLM is now strikingly evident. Given these developments, I propose considering the incorporation of vLLM support. The project is rapidly evolving and poised for a promising future.

micsama avatar Apr 23 '24 08:04 micsama

+1 for vLLM. It has become the first choice when we need to serve LLMs online. It's not only about distributing for more throughput; it is also accelerated at batch=1. It has FlashAttention, PagedAttention... Somehow I've found that some people here have the misunderstanding that "parallelism is only for more TPS, not for batch=1". High parallelism, or as much parallelism as you can get, is good for batch=1 too, given how CUDA is designed.

eigen2017 avatar May 06 '24 10:05 eigen2017

For example, vLLM manages all the tokens' KV caches in blocks, so it can be faster even when the batch size is 1.
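
A toy illustration of the block idea (this is not vLLM's actual implementation, just a sketch of mapping token positions to fixed-size KV cache blocks):

```python
# Toy sketch in the spirit of PagedAttention: KV entries live in fixed-size
# blocks allocated on demand instead of one big contiguous buffer per sequence.
BLOCK_SIZE = 16  # tokens per KV block

class BlockTable:
    def __init__(self):
        self.blocks = []          # physical block ids owned by this sequence
        self.next_free_block = 0  # stand-in for a global block allocator

    def slot_for(self, token_pos: int) -> tuple[int, int]:
        """Return (physical_block, offset) for a token position, allocating
        new blocks lazily as the sequence grows."""
        block_idx = token_pos // BLOCK_SIZE
        while len(self.blocks) <= block_idx:
            self.blocks.append(self.next_free_block)
            self.next_free_block += 1
        return self.blocks[block_idx], token_pos % BLOCK_SIZE

table = BlockTable()
print(table.slot_for(0))   # (0, 0)
print(table.slot_for(17))  # (1, 1) -- second block, second slot
```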

eigen2017 avatar May 06 '24 10:05 eigen2017

yeah vLLM support should be added.

KnutJaegersberg avatar May 11 '24 07:05 KnutJaegersberg

I will say that vLLM is popular with vision models, and with the recent addition of multimodal support I think vLLM would be a great fit, tbh.

JavaGamer avatar Aug 17 '25 21:08 JavaGamer

It's definitely a mature library now, so I think it would be good to integrate it.

wkdtjs avatar Aug 25 '25 10:08 wkdtjs

I don't think it'll be added actually! :(

oliverban avatar Aug 28 '25 20:08 oliverban

+1 Could we please add support for vLLM?

sumukhballal avatar Oct 18 '25 08:10 sumukhballal