
[Roadmap] vLLM Roadmap Q2 2024

Open simon-mo opened this issue 1 year ago • 18 comments

This document includes the features in vLLM's roadmap for Q2 2024. Please feel free to discuss and contribute to the specific features at related RFC/Issues/PRs and add anything else you'd like to talk about in this issue.

You can see our historical roadmaps at #2681 and #244. This roadmap contains work committed by the vLLM team from UC Berkeley, as well as the broader vLLM contributor groups including but not limited to Anyscale, IBM, Neural Magic, Roblox, and Oracle Cloud. You can also find help-wanted items in this roadmap! Additionally, this roadmap is shaped by you, our user community!

Themes

We categorized our roadmap into 6 broad themes:

  • Broad model support: vLLM should support a wide range of transformer-based models and be kept as up to date as possible. This includes new auto-regressive decoder models, encoder-decoder models, hybrid architectures, and models supporting multi-modal inputs.
  • Excellent hardware coverage: vLLM should run on a wide range of accelerators for production AI workloads. This includes GPUs, tensor accelerators, and CPUs. We will work closely with hardware vendors to ensure vLLM gets the best performance out of each chip.
  • Performance optimization: vLLM should be kept up to date with the latest performance optimization techniques. Users of vLLM can trust its performance to be competitive and strong.
  • Production level engine: vLLM should be the go-to serving engine for production, with a suite of features bridging the gap from a single forward pass to 24/7 service.
  • Strong OSS product: vLLM is and will be a true community project. We want it to be a healthy project with a regular release cadence, good documentation, and a growing base of reviewers for the codebase.
  • Extensible architectures: For vLLM to grow at an even faster pace, it needs good abstractions to support a wide range of scheduling policies, hardware backends, and inference optimizations. We will work on refactoring the codebase to support that.

Broad Model Support

  • [ ] Encoder Decoder Models
    • [ ] T5 #3117
    • [ ] Whisper
    • [ ] Embedding #3187
  • [ ] Hybrid Architecture (Jamba) #3690
  • [ ] Decoder Only Embedding Models #3734
  • [ ] Prefix tuning support

Help Wanted:

  • [ ] More vision transformers beyond LLaVA
  • [ ] Support private model registration #172
  • [ ] Control vector support #3451
  • [ ] Fallback support for arbitrary transformers text generation model
  • [ ] Long context investigation of LongRoPE
  • [ ] RWKV

Excellent Hardware Coverage

  • [ ] AMD MI300X: enhancing FP8 performance [enable FP8 compute]
  • [ ] NVIDIA H100: enhancing FP8 performance
  • [ ] AWS Trainium and Inferentia
  • [ ] Google TPU
  • [ ] Intel GPU and CPU
  • [ ] Intel Gaudi

Performance Optimization

  • Speculative decoding
    • [ ] Speculative decoding framework for top-1 proposals w/draft model
    • [ ] Proposer improvement: Prompt-lookup n-gram speculations
    • [ ] Scoring improvement: Make batch expansion optional
    • [ ] Scoring improvement: dynamic scoring length policy
  • Kernels:
    • [ ] FlashInfer integration #2767
    • [ ] Sampler optimizations leveraging triton compiler
  • Quantization:
    • [ ] FP8 format support for NVIDIA AMMO and AMD Quantizer
    • [ ] Weight-only quantization (Marlin) improvements: act_order, int8, ExLlamaV2 compatibility, fused MoE, AWQ kernels.
    • [ ] Activation quantization (W8A8, FP8, etc)
    • [ ] Quantized lora support #3225
    • [x] AQLM quantization
  • [ ] Constrained decoding performance (batch, async, acceleration) and extensibility (Outlines #3715, LMFormatEnforcer #3713, AICI #2888 )

Help Wanted:

  • [ ] Sparse KV cache (H2O, compression, FastDecode)
  • Speculative decoding
    • [ ] Proposer/scoring/verifier improvement: Top-k “tree attention” proposals for Eagle/Medusa/Draft model
    • [ ] Proposer improvement: RAG n-gram speculations
    • [ ] Proposer improvement: Eagle/Medusa top-1 proposals
    • [ ] Proposer improvement: Quantized draft models
    • [ ] Verifier improvement: Typical acceptance

Production Level Engine

  • Scheduling
    • [ ] Prototype Disaggregated prefill (#2370)
    • [ ] Speculative decoding fully merged in (#2188)
    • [ ] Turn chunked prefill (Sarathi/SplitFuse) on by default (#3538)
  • Memory management
    • [ ] Automatic prefix caching enhancement
  • [ ] TGI feature parity (stop string handling, logging and metrics, test improvements)
  • [ ] Provide a non-Ray option for single-node inference
  • [ ] Optimize API server performance
  • [ ] OpenAI server feature completeness (function calling) (#3237)
  • Model Loading
    • [x] Optimize model weights loading by directly loading from hub/s3 #3533
    • [ ] Fully offline mode

Help Wanted:

  • [ ] Logging serving FLOPs for performance analysis
  • [ ] Dynamic LoRA adapter downloads from hub/S3

Strong OSS Product

  • [ ] Continuous benchmarks (resources needed!)
  • [ ] Commit to a 2-week release cadence
  • [ ] Growing reviewer and committer base
  • Better docs
    • [ ] doc: memory and performance tuning guide
    • [ ] doc: automatic prefix caching (APC) documentation
    • [ ] doc: hardware support levels, feature matrix, and policies
    • [ ] doc: guide to horizontally scale up vLLM service
    • [ ] doc: developer guide for adding new draft-based models or draft-less optimizations
  • [ ] Automatic CD of nightly wheels and docker images

Help Wanted:

  • [ ] ARM aarch64 support for AWS Graviton-based instances and GH200
  • [ ] Full correctness tests against HuggingFace Transformers. Resources needed.
  • [ ] Well-tested support for lm-eval-harness (logprobs, get tokenizers)
  • [ ] Local development workflow without CUDA

Extensible Architecture

  • [ ] Prototype pipeline parallelism
  • [ ] Extensible memory manager
  • [ ] Extensible scheduler
  • [ ] torch.compile investigations
    • [ ] use compile for quantization kernel fusion
    • [ ] use compile for future proofing graph mode
    • [ ] use compile for xpu or other accelerators
  • [ ] Architecture for queue management and request prioritization
  • [ ] StreamingLLM, prototyped on the new block manager
  • [ ] Investigate tensor + pipeline parallelism (LIGER)

simon-mo avatar Apr 04 '24 22:04 simon-mo

@simon-mo Regarding prefill disaggregation: the Splitwise and DistServe papers both build their solutions on top of vLLM for evaluation. Are there any contributions from those teams? Is the vLLM community open to public contributions for this feature?

Jeffwan avatar Apr 05 '24 00:04 Jeffwan

@Jeffwan yes! We are actively working with the authors of both papers to integrate the work properly. We are also working with Sarathi's authors on chunked prefill.

simon-mo avatar Apr 05 '24 00:04 simon-mo

Any update on PEFT?

Please consider supporting Hugging Face PEFT, thank you. #1129

kanseaveg avatar Apr 05 '24 02:04 kanseaveg

Hi @kanseaveg, we do support LoRA and are planning to add prefix tuning support, which should cover the Hugging Face PEFT model format. Which PEFT methods are you interested in?
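For reference, here is a minimal sketch (the model name and adapter path below are placeholders) of how a PEFT-format LoRA adapter, i.e. a directory saved with save_pretrained containing adapter_config.json and the adapter weights, can already be used for offline inference:

  from vllm import LLM, SamplingParams
  from vllm.lora.request import LoRARequest

  # Base model with LoRA enabled; the adapter path is a placeholder pointing
  # at a PEFT-saved LoRA directory.
  llm = LLM(model="meta-llama/Llama-2-7b-hf", enable_lora=True)

  outputs = llm.generate(
      ["Write a haiku about paged attention."],
      SamplingParams(temperature=0.0, max_tokens=64),
      lora_request=LoRARequest("my_adapter", 1, "/path/to/peft/lora_adapter"),
  )
  print(outputs[0].outputs[0].text)

Prefix tuning is not covered by this path yet; that is the separate roadmap item above.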

simon-mo avatar Apr 05 '24 05:04 simon-mo

@simon-mo Thank you very much for your reply. There are three common types of tuning methods that I am currently concerned about:

  • prefix-tuning / p-tuning v2
  • adapter-tuning
  • lora-tuning (currently supported)

I hope the vLLM framework can support these, which is what I mentioned in Q3 last year and Q1 this year. Thank you very much for your reply.

kanseaveg avatar Apr 05 '24 05:04 kanseaveg

Maybe consider supporting the QuaRot quantization scheme?

QuaRot: Outlier-Free 4-Bit Inference in Rotated LLMs

We introduce QuaRot, a new Quantization scheme based on Rotations, which is able to quantize LLMs end-to-end, including all weights, activations, and KV cache in 4 bits. QuaRot rotates LLMs in a way that removes outliers from the hidden state without changing the output, making quantization easier. This computational invariance is applied to the hidden state (residual) of the LLM, as well as to the activations of the feed-forward components, aspects of the attention mechanism and to the KV cache. The result is a quantized model where all matrix multiplications are performed in 4-bits, without any channels identified for retention in higher precision. Our quantized LLaMa2-70B model has losses of at most 0.29 WikiText-2 perplexity and retains 99% of the zero-shot performance. Code is available at: this https URL.

I think this would be huge for larger models like Command-R+ (104B), which could then fit into a single 80 GB A100 with negligible performance loss.
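(Not the paper's code, just a quick numpy illustration of the computational-invariance idea: folding an orthogonal rotation Q into the weights leaves the matmul output unchanged while spreading activation outliers across channels, which is what makes low-bit quantization easier.)

  import numpy as np

  rng = np.random.default_rng(0)
  d = 256

  # Activations with a few extreme outlier channels, as seen in LLM hidden states.
  x = rng.normal(size=d)
  x[:4] *= 50.0

  W = rng.normal(size=(d, d)) / np.sqrt(d)

  # Random orthogonal rotation (QuaRot uses Hadamard transforms for efficiency).
  Q, _ = np.linalg.qr(rng.normal(size=(d, d)))

  y_ref = W @ x                  # original computation
  y_rot = (W @ Q) @ (Q.T @ x)    # rotated weights times rotated activations

  print(np.allclose(y_ref, y_rot))               # True: output is unchanged
  print(np.abs(x).max(), np.abs(Q.T @ x).max())  # peak activation magnitude drops sharply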

irdbl avatar Apr 05 '24 13:04 irdbl

Very excited to see both Embedding models and CPU support on the roadmap!

These being implemented would make vLLM my default model serving engine.

zbloss avatar Apr 05 '24 16:04 zbloss

Very excited to see that the tensorizer PR is in this roadmap! Sorry about all the pings, I'm just passionate about getting this to vLLM users :D More than happy to be of any assistance in getting that feature implemented :)

sangstar avatar Apr 05 '24 17:04 sangstar

Will larger vocabulary size for multi-lora be supported in Q2 2024? Related: https://github.com/vllm-project/vllm/issues/3000

PenutChen avatar Apr 08 '24 01:04 PenutChen

I'm very interested in implementing tree attention for speculative decoding. @simon-mo

yukavio avatar Apr 09 '24 07:04 yukavio

Will larger vocabulary size for multi-lora be supported in Q2 2024? Related: #3000

https://github.com/vllm-project/vllm/pull/4015 has addressed this.

jeejeelee avatar Apr 12 '24 10:04 jeejeelee

Will larger vocabulary size for multi-lora be supported in Q2 2024? Related: #3000

#4015 has addressed this.

This is strange: serving a LoRA fine-tune of Llama-3 (vocab size 128256) hits the same problem, "When using LoRA, vocab size must be 32000 >= vocab_size <= 33024", yet the same fine-tuning code applied to Qwen1.5-7B-Chat (vocab size 151643) serves without this problem. Why?

qZhang88 avatar Apr 19 '24 14:04 qZhang88

Will larger vocabulary size for multi-lora be supported in Q2 2024? Related: #3000

#4015 has addressed this.

This is strange: serving a LoRA fine-tune of Llama-3 (vocab size 128256) hits the same problem, "When using LoRA, vocab size must be 32000 >= vocab_size <= 33024", yet the same fine-tuning code applied to Qwen1.5-7B-Chat (vocab size 151643) serves without this problem. Why?

The create_lora_weights function of LogitsProcessorWithLoRA throws this error.

Models using the Llama architecture designate lm_head as a target module for LoRA and therefore need to instantiate LogitsProcessorWithLoRA; refer to: https://github.com/vllm-project/vllm/blob/main/vllm/lora/models.py#438

Models such as Qwen2 don't designate lm_head as a target module for LoRA, so they don't instantiate LogitsProcessorWithLoRA.
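A rough paraphrase of that logic (simplified and renamed for illustration, not vLLM's actual code):

  # Why the error appears only for Llama-style models: lm_head is among the
  # model's LoRA-supported modules, so LogitsProcessorWithLoRA is created and
  # its create_lora_weights() enforces a padded vocab-size window.
  LORA_VOCAB_MIN, LORA_VOCAB_MAX = 32000, 33024  # bounds quoted in the error message

  def check_lora_vocab(supported_lora_modules: list[str], vocab_size: int) -> None:
      if "lm_head" not in supported_lora_modules:
          return  # no LogitsProcessorWithLoRA is created, so no vocab check
      if not (LORA_VOCAB_MIN <= vocab_size <= LORA_VOCAB_MAX):
          raise ValueError(
              f"When using LoRA, vocab size must be "
              f"{LORA_VOCAB_MIN} >= vocab_size <= {LORA_VOCAB_MAX}")

  check_lora_vocab(["qkv_proj", "o_proj"], 151643)   # Qwen1.5-style: passes
  check_lora_vocab(["qkv_proj", "lm_head"], 128256)  # Llama-3-style: raises ValueError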

jeejeelee avatar Apr 19 '24 15:04 jeejeelee

Will larger vocabulary size for multi-lora be supported in Q2 2024? Related: #3000

#4015 has addressed this.

This is strange: serving a LoRA fine-tune of Llama-3 (vocab size 128256) hits the same problem, "When using LoRA, vocab size must be 32000 >= vocab_size <= 33024", yet the same fine-tuning code applied to Qwen1.5-7B-Chat (vocab size 151643) serves without this problem. Why?

The create_lora_weights function of LogitsProcessorWithLoRA throws this error.

Models using the Llama architecture designate lm_head as a target module for LoRA and therefore need to instantiate LogitsProcessorWithLoRA; refer to: https://github.com/vllm-project/vllm/blob/main/vllm/lora/models.py#438

Models such as Qwen2 don't designate lm_head as a target module for LoRA, so they don't instantiate LogitsProcessorWithLoRA.

I see, but lm_head is not fine-tuned during LoRA, so there is no need to replace logits_processor. In my adapter_config.json, target_modules does not contain lm_head:

  "target_modules": [
    "gate_proj",
    "v_proj",
    "q_proj",
    "o_proj",
    "up_proj",
    "k_proj",
    "down_proj"
  ],

qZhang88 avatar Apr 20 '24 01:04 qZhang88

Will larger vocabulary size for multi-lora be supported in Q2 2024? Related: #3000

#4015 has addressed this.

This is strange: serving a LoRA fine-tune of Llama-3 (vocab size 128256) hits the same problem, "When using LoRA, vocab size must be 32000 >= vocab_size <= 33024", yet the same fine-tuning code applied to Qwen1.5-7B-Chat (vocab size 151643) serves without this problem. Why?

The create_lora_weights function of LogitsProcessorWithLoRA throws this error. Models using the Llama architecture designate lm_head as a target module for LoRA and therefore need to instantiate LogitsProcessorWithLoRA; refer to: https://github.com/vllm-project/vllm/blob/main/vllm/lora/models.py#438 Models such as Qwen2 don't designate lm_head as a target module for LoRA, so they don't instantiate LogitsProcessorWithLoRA.

I see, but lm_head is not fine-tuned during LoRA, so there is no need to replace logits_processor. In my adapter_config.json, target_modules does not contain lm_head:

  "target_modules": [
    "gate_proj",
    "v_proj",
    "q_proj",
    "o_proj",
    "up_proj",
    "k_proj",
    "down_proj"
  ],

vLLM supports multi-LoRA, and whether to replace logits_processor is determined by the model's list of supported LoRA modules, not by adapter_config.json.

jeejeelee avatar Apr 20 '24 02:04 jeejeelee

would like to help with #620

Vermeille avatar Apr 25 '24 10:04 Vermeille

How about the move from a single-process architecture to a multi-process architecture? Is this work still in progress?

CSEEduanyu avatar Apr 30 '24 07:04 CSEEduanyu

@Jeffwan yes! We are actively with the authors of both papers to integrate the work properly. We are also working with Sarathi's authors for chunked prefill as well.

Looking forward to the release of vLLM's support for the prefill-decode disaggregation feature.

WangErXiao avatar May 04 '24 03:05 WangErXiao

@simon-mo Hi, how about https://arxiv.org/abs/2404.18057? It seems to have a significant advantage on long sequences, and it does not conflict with PagedAttention.

colourful-tree avatar May 08 '24 06:05 colourful-tree

@simon-mo Any update on #3117? This issue was raised in February, and it has been nearly three months. We sincerely look forward to your update on this, thank you.

kanseaveg avatar May 10 '24 00:05 kanseaveg

@simon-mo Any update on https://github.com/vllm-project/vllm/pull/3117? This issue was raised in February, and it has been nearly three months. We sincerely look forward to your update on this, thank you.

Still in progress. @robertgshaw2-neuralmagic can help comment more.

simon-mo avatar May 10 '24 00:05 simon-mo

Do you have plans to incorporate RISC-V or ARM CPU backends into the vLLM project? Thank you.

zxy-zzz avatar May 11 '24 09:05 zxy-zzz

@simon-mo I have an implementation of "Speculative Decoding - Proposer improvement: Eagle/Medusa top-1 proposals" (#4669). I will be creating the PR after I've done some more testing.

I can also start work on Typical acceptance.

abhigoyal1997 avatar May 12 '24 05:05 abhigoyal1997

We should consider long-context optimizations for Q3.

  • e.g. things like https://github.com/feifeibear/long-context-attention

robertgshaw2-redhat avatar May 17 '24 15:05 robertgshaw2-redhat

Hi - with smaller models being popular these days - I'm wondering if, for Q3, there are any plans for data parallelism support (loading copies of the same model onto multiple GPUs).

If not, I can help with this.

sumukshashidhar avatar May 19 '24 01:05 sumukshashidhar

Do you have plans to support NVIDIA Jetson devices with aarch64?

johnsonwag03 avatar May 19 '24 13:05 johnsonwag03

Hi - with smaller models being popular these days - I'm wondering if, for Q3, there are any plans for data parallelism support (loading copies of the same model onto multiple GPUs).

If not, I can help with this.

Are you thinking this would be something handled internally by LLMEngine, or a new front end that stands in front of it?

If handled internally, this will require significant changes to the core logic.

Also, if this is targeted at offline batch mode, perhaps we will see some gains, though I suspect not too much, since we can saturate the GPU via batching even with TP.

If this is targeted at online serving, I do not think we should be implementing a load balancer in vLLM. This should be handled by higher-level orchestrators like Kubernetes or Ray.

robertgshaw2-redhat avatar May 19 '24 14:05 robertgshaw2-redhat

Hi - with smaller models being popular these days - I'm wondering if, for Q3, there are any plans for data parallelism support (loading copies of the same model onto multiple GPUs). If not, I can help with this.

Are you thinking this would be something handled internally by LLMEngine, or a new front end that stands in front of it?

If handled internally, this will require significant changes to the core logic.

Also, if this is targeted at offline batch mode, perhaps we will see some gains, though I suspect not too much, since we can saturate the GPU via batching even with TP.

If this is targeted at online serving, I do not think we should be implementing a load balancer in vLLM. This should be handled by higher-level orchestrators like Kubernetes or Ray.

My particular use case is automatic large offline batches, for which I have a hotfix: I spin up multiple OpenAI servers and distribute the prompts among them. Curiously, I see large speedups when I do this, as opposed to TP.

Also, if this is targeted at offline batch mode, perhaps we will see some gains, though I suspect not too much, since we can saturate the GPU via batching even with TP.

I'm not sure if this is a bug or something else, because I did indeed see large speedups with this when I completely removed Ray worker communication (some digging suggested the overhead is not worth it). If this is not expected, I can try out some experiments and post them here. (This may be an artifact of my having a PCIe GPU cluster without NVLink.)
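For concreteness, my hotfix looks roughly like this (ports, model name, and prompts below are placeholders): one vLLM OpenAI-compatible server per GPU, with the client round-robining prompts across them.

  import itertools
  import requests
  from concurrent.futures import ThreadPoolExecutor

  SERVERS = ["http://localhost:8000", "http://localhost:8001"]  # one vLLM server per GPU
  MODEL = "meta-llama/Meta-Llama-3-8B-Instruct"

  def complete(args):
      # Send one prompt to one server via the OpenAI-compatible /v1/completions API.
      base_url, prompt = args
      resp = requests.post(
          f"{base_url}/v1/completions",
          json={"model": MODEL, "prompt": prompt, "max_tokens": 64},
          timeout=300,
      )
      resp.raise_for_status()
      return resp.json()["choices"][0]["text"]

  prompts = [f"Summarize document {i}." for i in range(1000)]
  with ThreadPoolExecutor(max_workers=64) as pool:
      # Round-robin assignment of prompts to servers, completed concurrently.
      results = list(pool.map(complete, zip(itertools.cycle(SERVERS), prompts)))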

sumukshashidhar avatar May 19 '24 16:05 sumukshashidhar

Hi - with smaller models being popular these days - I'm wondering if, for Q3, there are any plans for data parallelism support (loading copies of the same model onto multiple GPUs). If not, I can help with this.

Are you thinking this would be something handled internally by LLMEngine, or a new front end that stands in front of it? If handled internally, this will require significant changes to the core logic. Also, if this is targeted at offline batch mode, perhaps we will see some gains, though I suspect not too much, since we can saturate the GPU via batching even with TP. If this is targeted at online serving, I do not think we should be implementing a load balancer in vLLM. This should be handled by higher-level orchestrators like Kubernetes or Ray.

My particular use case is automatic large offline batches, for which I have a hotfix: I spin up multiple OpenAI servers and distribute the prompts among them. Curiously, I see large speedups when I do this, as opposed to TP.

Also, if this is targeted at offline batch mode, perhaps we will see some gains, though I suspect not too much, since we can saturate the GPU via batching even with TP.

I'm not sure if this is a bug or something else, because I did indeed see large speedups with this when I completely removed Ray worker communication (some digging suggested the overhead is not worth it). If this is not expected, I can try out some experiments and post them here. (This may be an artifact of my having a PCIe GPU cluster without NVLink.)

Okay, great. We would welcome a contribution focused on the offline batch processing case.

Could you make an RFC issue to discuss a potential design? I think we should try hard not to modify LLMEngine and see if we can handle things in the LLM class.
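To make the RFC concrete, here is one possible shape, purely a sketch under my own assumptions rather than a committed design: LLMEngine stays untouched, and a thin wrapper shards prompts across per-GPU processes, each owning an independent LLM.

  import os
  from multiprocessing import get_context

  def _worker(gpu_id: int, prompts: list[str]) -> list[str]:
      # Pin this process to one GPU before vLLM initializes CUDA.
      os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
      from vllm import LLM, SamplingParams  # import after setting the device mask
      llm = LLM(model="facebook/opt-125m")  # placeholder model
      outputs = llm.generate(prompts, SamplingParams(max_tokens=32))
      return [o.outputs[0].text for o in outputs]

  def generate_data_parallel(prompts: list[str], num_gpus: int) -> list[str]:
      shards = [prompts[i::num_gpus] for i in range(num_gpus)]
      ctx = get_context("spawn")  # avoid CUDA re-initialization issues with fork
      with ctx.Pool(num_gpus) as pool:
          results = pool.starmap(_worker, list(enumerate(shards)))
      merged = [None] * len(prompts)
      for gpu_id, shard_out in enumerate(results):
          merged[gpu_id::num_gpus] = shard_out  # restore original prompt order
      return merged

  if __name__ == "__main__":  # required with the spawn start method
      print(generate_data_parallel([f"Prompt {i}" for i in range(8)], num_gpus=2))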

robertgshaw2-redhat avatar May 19 '24 16:05 robertgshaw2-redhat

Very excited to see function calling support for the OpenAI-compatible server on this roadmap! This is quite helpful when using LangChain.

fenggwsx avatar May 31 '24 13:05 fenggwsx

@Jeffwan yes! We are actively working with the authors of both papers to integrate the work properly. We are also working with Sarathi's authors on chunked prefill.

Hi @simon-mo. Is there any update on Splitwise? It seems that development of https://github.com/vllm-project/vllm/pull/2809 has stopped.

irasin avatar Jun 03 '24 07:06 irasin

Would love to see updates to the docs on how to use the supported vision models, embedding models, and the new support for tools with forced tool choice (auto tool choice is still WIP, as I understand).

K-Mistele avatar Jun 16 '24 21:06 K-Mistele