
[Roadmap] vLLM Roadmap Q4 2024

Open simon-mo opened this issue 1 year ago • 6 comments

This page is accessible via roadmap.vllm.ai

Themes.

As before, we categorized our roadmap into six broad themes: broad model support, wide hardware coverage, state-of-the-art performance optimization, production-level engine, strong OSS community, and extensible architectures.

Broad Model Support

  • [ ] Enhance LLM Support
    • [ ] Hybrid/Interleaved Attention (#9464)
  • [ ] Enhance Multi-Modality in vLLM (#4194)
  • [ ] Enhance Support for State Space Models (Mamba)
  • [ ] Reward Model API (#8967)
  • [ ] Arbitrary HF model (a collaboration with Hugging Face!)
  • [ ] Whisper

Help wanted:

  • [ ] Expand coverage for encoder-decoder models (Bert, XLMRoberta, BGE, T5) (#5447)
  • [ ] API for streaming input (in particular for audio)

Hardware Support

  • [ ] A feature matrix for all the hardware platforms that vLLM supports, and their maturity levels
  • [ ] Expanding feature support on various hardware platforms
    • [ ] Fast PagedAttention and Chunked Prefill on Inferentia
    • [ ] Upstream of Intel Gaudi
    • [ ] Enhancements in TPU Support
    • [ ] Upstream enhancements in AMD MI300x
    • [ ] Performance enhancement and measurement for NVIDIA H200
    • [ ] New accelerator support: IBM Spyre

Help wanted:

  • [ ] Design for pluggable, out-of-tree hardware backend similar to PyTorch’s PrivateUse API
  • [ ] Prototype JAX support

Performance Optimizations

  • [ ] Turn on chunked prefill, prefix caching, and speculative decoding by default
  • [ ] Optimizations for structured outputs
  • [ ] Fused GEMM/all-reduce leveraging Flux and AsyncTP
  • [ ] Enhancement and overhead-removal in offline LLM use cases.
  • [ ] Better kernels (FA3, FlashInfer, FlexAttention, Triton)
  • [ ] Native integration with torch.compile

Help wanted:

  • [ ] A fast ngrams speculator
  • [ ] Sparse KV cache framework (#5751)
  • [ ] Long context optimizations: context parallelism, etc.

Production Features

  • [ ] KV cache offload to CPU and disk
  • [ ] Disaggregated Prefill
  • [ ] More control in prefix caching, and scheduler policies
  • [ ] Automated speculative decoding policy, see Dynamic Speculative Decoding

Help wanted

  • [ ] Support multiple models in the same server

OSS Community

  • [ ] Enhancements in performance benchmark: more realistic workload, more hardware backends (H200s)
  • [ ] Better developer documentations for getting started with contribution and research

Help wanted

  • [ ] Documentation enhancements in general (styling, UI, explainers, tutorials, examples, etc)

Extensible Architecture

  • [ ] Full support for torch.compile
  • [ ] vLLM Engine V2: Asynchronous Scheduling and Prefix Caching Centric Design (#8779)
  • [ ] A generic memory manager supporting multi-modality, sparsity, and others

If any item you want is not on the roadmap, your suggestions and contributions are still welcome! Please feel free to comment in this thread, open a feature request, or create an RFC.

Historical Roadmap: #5805, #3861, #2681, #244

simon-mo avatar Oct 01 '24 17:10 simon-mo

Support for KV cache compression

  • [ ] Upstream https://github.com/IsaacRe/vllm-kvcompress/tree/main (related issues: #3532, #5751)

IsaacRe avatar Oct 02 '24 19:10 IsaacRe

Do we have plans to support https://github.com/vllm-project/vllm/issues/5540? We have a production-level use case and would really appreciate it if someone could look into it from Q4 onwards.

ksjadeja avatar Oct 04 '24 17:10 ksjadeja

Hi, do we have any follow-up issue or Slack channel for the "KV cache offload to CPU and disk" task? Our team has previously explored some KV cache offloading work based on vLLM, and we'd be happy to join any relevant discussion or contribute to the development if there's a chance.

Personally, I'm also looking forward to learning more about the "More control in prefix caching, and scheduler policies" part 😊.

yangsijia-serena avatar Oct 12 '24 06:10 yangsijia-serena

@simon-mo Hi, regarding the topic "KV cache offload to CPU and disk": I previously implemented a version that stores the KV cache in a local file (https://github.com/vllm-project/vllm/pull/8018). I also added the relevant abstractions so other storage media can be supported. Is there a Slack channel for this? We could discuss the specific scheme there; I'm also quite interested in this feature.

zeroorhero avatar Oct 12 '24 06:10 zeroorhero

@sylviayangyy @zeroorhero thank you for your interest! Yes, @KuntaiDu has created a #feat-kvcache-offloading channel to discuss this.

simon-mo avatar Oct 14 '24 18:10 simon-mo

Do we have plans to support #5540? We are having a production level use case and would really appreciate if someone can look into it for Q4 onwards.

It looks like LoRA is now supported. Are you encountering any issues?

jeejeelee avatar Oct 16 '24 14:10 jeejeelee

Any plans on improving guided decoding? There's a long-standing RFC for it (#5423) and previous attempts have been made (e.g. #6273). Unfortunately, it seems to have been forgotten since.

In particular, I'd love to see it become async (the logit mask or biases can be calculated while the GPU is busy computing logits) and support fast-forwarding tokens when the next few tokens are deterministic.
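For illustration, a minimal sketch of that overlap idea in plain Python/PyTorch (compute_allowed_token_mask and forward_pass are hypothetical stand-ins, not vLLM or grammar-library APIs): the CPU-side mask is computed in a worker thread while the forward pass runs, and the resulting additive bias is applied to the logits before sampling.

from concurrent.futures import ThreadPoolExecutor

import torch

VOCAB_SIZE = 32_000

def compute_allowed_token_mask(allowed_ids: list[int]) -> torch.Tensor:
    # Hypothetical CPU-side grammar/FSM walk: build an additive bias that keeps
    # allowed tokens and pushes everything else to -inf.
    bias = torch.full((VOCAB_SIZE,), float("-inf"))
    bias[allowed_ids] = 0.0
    return bias

def forward_pass() -> torch.Tensor:
    # Stand-in for the GPU forward pass that produces next-token logits.
    return torch.randn(VOCAB_SIZE)

def guided_step(allowed_ids: list[int], pool: ThreadPoolExecutor) -> int:
    # Start the mask computation in a worker thread so it overlaps with the
    # forward pass instead of running after it.
    mask_future = pool.submit(compute_allowed_token_mask, allowed_ids)
    logits = forward_pass()
    logits = logits + mask_future.result()  # apply the bias once both are done
    return int(torch.argmax(logits))

with ThreadPoolExecutor(max_workers=1) as pool:
    print(guided_step([7, 11, 42], pool))  # always one of the allowed ids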

iiLaurens avatar Oct 19 '24 21:10 iiLaurens

Is there an opportunity to participate in changes related to speculative decoding? I'm working on some practices that might be helpful.

HuYunhai-Alex avatar Oct 19 '24 21:10 HuYunhai-Alex

Any plans on improving guided decoding? There's a long standing RFC for it (#5423) and previous attempts have been made (e.g. #6273). Unfortunately seems to have been forgotten since.

In particular I'd love to see it become async (logit mask or biases can be calculated while GPU is working on calculating logits) and fast forwarding tokens when the next few tokens are deterministic.

I second this. We are using vLLM to host our production inference servers, and all of our downstream applications rely on guided JSON decoding to ensure that the output is parsable. There is a significant performance difference between guided and non-guided decoding, and any performance improvements would help increase throughput.

devdev999 avatar Oct 22 '24 06:10 devdev999

Any plans on improving guided decoding? There's a long standing RFC for it (#5423) and previous attempts have been made (e.g. #6273). Unfortunately seems to have been forgotten since. In particular I'd love to see it become async (logit mask or biases can be calculated while GPU is working on calculating logits) and fast forwarding tokens when the next few tokens are deterministic.

I second this. We are using vLLM to host our production inference servers and all of our downstream applications rely on guided json decoding to ensure that output is parsable. There is a significant performance difference between guided and non-guided decoding and any performance improvements would be helpful to increase throughput.

Hey, I maintain the guidance project, and we worked on the first proposal in #6273. It looks like vLLM has changed significantly since then, but if there is appetite from the maintainers for upgraded, more performant guided decoding, we're happy to take another look and investigate a new PR. In particular, guidance (and our high-performance Rust implementation, llguidance) already does async computations on CPU, calculates fast-forward tokens, etc., and is typically accelerative for JSON schemas.

@JC1DA @mmoskal

Harsha-Nori avatar Oct 22 '24 23:10 Harsha-Nori

Do we have plans to support #5540? We are having a production level use case and would really appreciate if someone can look into it for Q4 onwards.

It looks like LoRA is now supported. Are you encountering any issues?

Yes, if we look at the class in mixtral_quant.py, it does not have SupportsLoRA, which means LoRA is not supported for quantized Mixtral; but in mixtral.py, SupportsLoRA is included in MixtralForCausalLM. I have a trained LoRA adapter that I want to use on top of a Mixtral AWQ model without merging, directly as a hot swap. Let me know if you know a better way to tackle this situation.

ksjadeja avatar Oct 29 '24 05:10 ksjadeja

Do we have plans to support #5540? We are having a production level use case and would really appreciate if someone can look into it for Q4 onwards.

It looks like LoRA is now supported. Are you encountering any issues?

Yes, if we look at the class in mixtral_quant.py, it does not have SupportsLora which means lora is not supported for quantized Mixtral. but for mixtral.py, we have SupportsLora included in MixtralForCausalLM. I have a LORA adapter trained which I want to use on top of mixtral-awq model without merging, directly as a hot swap. Let me know if you know a better way to tackle this situation

I'm guessing you explicitly set the quantization argument, right? If so, you can try removing that argument and testing it out with a script like the following:

from vllm import LLM

llm = LLM(
    model="Mixtral-8x7B-Instruct-v0.1-GPTQ",
    trust_remote_code=True,
    gpu_memory_utilization=0.6,
    enable_lora=True,
    # no explicit quantization=... argument; let vLLM detect it from the checkpoint
)
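If the model does load with enable_lora=True, the adapter can then be applied per request (hot-swapped, without merging) via LoRARequest. A rough sketch continuing from the snippet above, with the adapter name and path as placeholders:

from vllm import SamplingParams
from vllm.lora.request import LoRARequest

# Placeholder path to the trained adapter directory.
lora_path = "/path/to/mixtral-lora-adapter"

outputs = llm.generate(
    ["Write a haiku about GPUs."],
    SamplingParams(max_tokens=64),
    # The adapter is applied only to this request; the base weights stay untouched.
    lora_request=LoRARequest("mixtral-adapter", 1, lora_path),
)
print(outputs[0].outputs[0].text)

Depending on the adapter's rank, max_lora_rank may also need to be raised when constructing the LLM.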

jeejeelee avatar Oct 29 '24 07:10 jeejeelee

Any plans on improving guided decoding? There's a long standing RFC for it (#5423) and previous attempts have been made (e.g. #6273). Unfortunately seems to have been forgotten since. In particular I'd love to see it become async (logit mask or biases can be calculated while GPU is working on calculating logits) and fast forwarding tokens when the next few tokens are deterministic.

I second this. We are using vLLM to host our production inference servers and all of our downstream applications rely on guided json decoding to ensure that output is parsable. There is a significant performance difference between guided and non-guided decoding and any performance improvements would be helpful to increase throughput.

Hey, I maintain the guidance project and we worked on the first proposal in #6273 . Looks like vLLM has changed significantly since then, but if there is appetite for upgraded/more performant guided decoding work from the maintainers, we're happy to take another look and investigate a new PR. In particular, guidance (and our high performance rust implementation in llguidance already does async computations on CPU, calculates fast forward tokens, etc. and is typically accelerative for JSON schema.

@JC1DA @mmoskal

Improvements in guided generation performance would be very welcome. There is a helpful comment by @stas00 from last month with a nice summary of where things currently stand.

dbuades avatar Oct 29 '24 21:10 dbuades

Do we have plans to support #5540? We are having a production level use case and would really appreciate if someone can look into it for Q4 onwards.

It looks like LoRA is now supported. Are you encountering any issues?

Yes, if we look at the class in mixtral_quant.py, it does not have SupportsLora which means lora is not supported for quantized Mixtral. but for mixtral.py, we have SupportsLora included in MixtralForCausalLM. I have a LORA adapter trained which I want to use on top of mixtral-awq model without merging, directly as a hot swap. Let me know if you know a better way to tackle this situation

I'm guessing you explicitly set the quantization, right? If so, you can try removing that argument and test it out, like the following script:

llm = LLM(
    model="Mixtral-8x7B-Instruct-v0.1-GPTQ",
    trust_remote_code=True,
    gpu_memory_utilization=0.6,
    enable_lora=True,
)

Tried this, but it does not work; I get the same error. Just mentioning that I'm using an AWQ-quantized model. [rank0]: ValueError: Model MixtralForCausalLM does not support LoRA, but LoRA is enabled. Support for this model may be added in the future. If this is important to you, please open an issue on github.

ksjadeja avatar Oct 30 '24 02:10 ksjadeja

Do we have plans to support #5540? We are having a production level use case and would really appreciate if someone can look into it for Q4 onwards.

It looks like LoRA is now supported. Are you encountering any issues?

Yes, if we look at the class in mixtral_quant.py, it does not have SupportsLora which means lora is not supported for quantized Mixtral. but for mixtral.py, we have SupportsLora included in MixtralForCausalLM. I have a LORA adapter trained which I want to use on top of mixtral-awq model without merging, directly as a hot swap. Let me know if you know a better way to tackle this situation

I'm guessing you explicitly set the quantization, right? If so, you can try removing that argument and test it out, like the following script:

llm = LLM(
    model="Mixtral-8x7B-Instruct-v0.1-GPTQ",
    trust_remote_code=True,
    gpu_memory_utilization=0.6,
    enable_lora=True,
)

Tried this, but does not work. I get the same error. Just mentioning that I use awq quantized model [rank0]: ValueError: Model MixtralForCausalLM does not support LoRA, but LoRA is enabled. Support for this model may be added in the future. If this is important to you, please open an issue on github.

Which vLLM version are you using?

According to the code at https://github.com/vllm-project/vllm/blob/v0.6.3.post1/vllm/model_executor/model_loader/utils.py#L30, both GPTQ and AWQ quantization methods should be compatible when using version v0.6.3.post1.

jeejeelee avatar Oct 30 '24 02:10 jeejeelee

Any interest in vAttention? https://github.com/vllm-project/vllm/issues/4675

Edenzzzz avatar Nov 11 '24 03:11 Edenzzzz

More and more speech models use an LLM to predict non-text tokens. ChatTTS and FishTTS, for example, both use a LLaMA backbone to predict speech tokens. But unlike LLaMA for text, the speech LLaMA uses multiple lm_heads to predict more than one token in parallel, and therefore sums the n token embeddings when building the LLM input embedding. I am currently trying to get ChatTTS running with vLLM (see here), but a lot of code needs to be updated and it seems to break some fundamental design decisions. So maybe you can consider supporting this officially; it would definitely have more impact on speech solutions.
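To make the pattern concrete, here is a minimal, self-contained sketch with illustrative sizes and a stand-in backbone (this is not ChatTTS's actual architecture): several lm_heads each predict one token per step, and the per-head token embeddings are summed to form the next input embedding.

import torch
import torch.nn as nn

class MultiHeadSpeechLM(nn.Module):
    # Illustrative only: an LLM backbone with several parallel heads for speech codebooks.
    def __init__(self, hidden: int = 768, vocab: int = 1024, num_heads: int = 4):
        super().__init__()
        self.embeddings = nn.ModuleList(nn.Embedding(vocab, hidden) for _ in range(num_heads))
        # Stand-in for a LLaMA-style decoder backbone.
        self.backbone = nn.TransformerEncoderLayer(hidden, nhead=8, batch_first=True)
        self.lm_heads = nn.ModuleList(nn.Linear(hidden, vocab) for _ in range(num_heads))

    def step(self, prev_tokens: torch.Tensor) -> torch.Tensor:
        # prev_tokens: (batch, num_heads), one token per codebook from the previous step.
        # Sum the per-codebook embeddings into a single input embedding.
        x = sum(emb(prev_tokens[:, i]) for i, emb in enumerate(self.embeddings)).unsqueeze(1)
        h = self.backbone(x)[:, -1]
        # Each head predicts its own next token in parallel.
        return torch.stack([head(h).argmax(-1) for head in self.lm_heads], dim=-1)

model = MultiHeadSpeechLM()
tokens = torch.randint(0, 1024, (2, 4))  # batch of 2, 4 codebooks
print(model.step(tokens).shape)          # torch.Size([2, 4])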

niuzheng168 avatar Nov 14 '24 03:11 niuzheng168

Any plans on improving guided decoding? There's a long standing RFC for it (#5423) and previous attempts have been made (e.g. #6273). Unfortunately seems to have been forgotten since. In particular I'd love to see it become async (logit mask or biases can be calculated while GPU is working on calculating logits) and fast forwarding tokens when the next few tokens are deterministic.

I second this. We are using vLLM to host our production inference servers and all of our downstream applications rely on guided json decoding to ensure that output is parsable. There is a significant performance difference between guided and non-guided decoding and any performance improvements would be helpful to increase throughput.

Hey, I maintain the guidance project and we worked on the first proposal in #6273 . Looks like vLLM has changed significantly since then, but if there is appetite for upgraded/more performant guided decoding work from the maintainers, we're happy to take another look and investigate a new PR. In particular, guidance (and our high performance rust implementation in llguidance already does async computations on CPU, calculates fast forward tokens, etc. and is typically accelerative for JSON schema. @JC1DA @mmoskal

Improvements in guided generation performance would be very welcome. There is a helpful comment by @stas00 from last month with a nice summary of where things currently stand.

Do we have plans to improve concurrency performance for guided decoding? Enabling guided_json for concurrent requests results in significant throughput and latency degradation. (#3567)

Enhancements in concurrency performance for guided decoding would greatly benefit high-volume, real-time applications.

kentoym avatar Nov 14 '24 19:11 kentoym

Quick update -- we've made an initial PR to support guidance as a backend, which does improve performance over current implementations (https://github.com/vllm-project/vllm/pull/10217). Of course, better support for concurrency in general would also help guidance get significantly faster. Happy to support there and help if we can too!

@JC1DA

Harsha-Nori avatar Nov 15 '24 02:11 Harsha-Nori

I am interested in optimizations related to speculative decoding. Is there an opportunity to get involved?

wanghongyu2001 avatar Nov 24 '24 12:11 wanghongyu2001

I have a somewhat similar question to @wanghongyu2001: if someone is interested in contributing to a specific aspect of vLLM, what’s the recommended path to get involved? Specifically, are there any suggested learning resources to systematically understand the vLLM codebase and, in particular, the v1 architecture?

In addition to navigating the codebase, are there other structured ways to ramp up, such as design docs, suggested YouTube videos, or any important PRs/files worth reading through (in case I'm missing anything)? I'd be thrilled to dive in and contribute to the project. Any guidance would be much appreciated!

Toubat avatar Nov 24 '24 23:11 Toubat

Any plans on improving guided decoding? There's a long standing RFC for it (#5423) and previous attempts have been made (e.g. #6273). Unfortunately seems to have been forgotten since. In particular I'd love to see it become async (logit mask or biases can be calculated while GPU is working on calculating logits) and fast forwarding tokens when the next few tokens are deterministic.

I second this. We are using vLLM to host our production inference servers and all of our downstream applications rely on guided json decoding to ensure that output is parsable. There is a significant performance difference between guided and non-guided decoding and any performance improvements would be helpful to increase throughput.

Hey, I maintain the guidance project and we worked on the first proposal in #6273 . Looks like vLLM has changed significantly since then, but if there is appetite for upgraded/more performant guided decoding work from the maintainers, we're happy to take another look and investigate a new PR. In particular, guidance (and our high performance rust implementation in llguidance already does async computations on CPU, calculates fast forward tokens, etc. and is typically accelerative for JSON schema. @JC1DA @mmoskal

Improvements in guided generation performance would be very welcome. There is a helpful comment by @stas00 from last month with a nice summary of where things currently stand.

Do we have plans to improve concurrency performance for guided decoding? Enabling guided_json for concurrent requests results in significant throughput and latency degradation. (#3567)

Enhancements in concurrency performance for guided decoding would greatly benefit high-volume, real-time applications.

We could definitely use a thread pool to process the list of logits in parallel. Since vLLM can run a different number of logits processors for each row of logits in a batch, a fully batched logits processor seems complex to implement. However, using a thread pool also requires some mandatory changes from the guided decoding libraries:

  1. They must be thread-safe. From what I've experimented with so far, lm-format-enforcer does not appear to be thread-safe and failed some tests when run with a thread pool.
  2. PyTorch in-place operations must be removed; again, these ops failed when used in a thread pool.
  3. An efficient implementation should release the GIL immediately after being called.

Also, since vLLM is capable of producing multiple output tokens per sequence per step, we could leverage that for fast-forwarded tokens in JSON guided generation (super beneficial for performance).
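A minimal sketch of that thread-pool approach under those constraints (hypothetical helpers, not vLLM's actual scheduler or API): each sequence's own chain of logits processors is applied to its row in a worker thread, and processors return new tensors instead of mutating the logits in place.

from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Sequence

import torch

LogitsProcessor = Callable[[list[int], torch.Tensor], torch.Tensor]

def apply_row(processors: Sequence[LogitsProcessor], token_ids: list[int], row: torch.Tensor) -> torch.Tensor:
    # Run the sequence's processor chain; each call returns a new tensor (no in-place ops).
    for proc in processors:
        row = proc(token_ids, row)
    return row

def apply_batched(per_seq_processors, per_seq_token_ids, logits: torch.Tensor, pool: ThreadPoolExecutor) -> torch.Tensor:
    # One worker task per row; different rows may have different processor chains.
    rows = pool.map(apply_row, per_seq_processors, per_seq_token_ids, logits.unbind(0))
    return torch.stack(list(rows), dim=0)

# Toy usage: the first sequence bans token 0, the second has no processors.
ban_zero: LogitsProcessor = lambda ids, row: row.masked_fill(
    torch.arange(row.numel()) == 0, float("-inf")
)
logits = torch.randn(2, 10)
with ThreadPoolExecutor(max_workers=2) as pool:
    out = apply_batched([[ban_zero], []], [[1, 2], [3]], logits, pool)
print(out[0, 0])  # tensor(-inf)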

JC1DA avatar Nov 26 '24 01:11 JC1DA

Interested in thoughts/plan on EXL2 support: https://github.com/vllm-project/vllm/issues/3203

gpgn avatar Nov 27 '24 09:11 gpgn

Any plans on improving guided decoding? There's a long standing RFC for it (#5423) and previous attempts have been made (e.g. #6273). Unfortunately seems to have been forgotten since. In particular I'd love to see it become async (logit mask or biases can be calculated while GPU is working on calculating logits) and fast forwarding tokens when the next few tokens are deterministic.

I second this. We are using vLLM to host our production inference servers and all of our downstream applications rely on guided json decoding to ensure that output is parsable. There is a significant performance difference between guided and non-guided decoding and any performance improvements would be helpful to increase throughput.

Hey, I maintain the guidance project and we worked on the first proposal in #6273 . Looks like vLLM has changed significantly since then, but if there is appetite for upgraded/more performant guided decoding work from the maintainers, we're happy to take another look and investigate a new PR. In particular, guidance (and our high performance rust implementation in llguidance already does async computations on CPU, calculates fast forward tokens, etc. and is typically accelerative for JSON schema. @JC1DA @mmoskal

Improvements in guided generation performance would be very welcome. There is a helpful comment by @stas00 from last month with a nice summary of where things currently stand.

Do we have plans to improve concurrency performance for guided decoding? Enabling guided_json for concurrent requests results in significant throughput and latency degradation. (#3567)

Enhancements in concurrency performance for guided decoding would greatly benefit high-volume, real-time applications.

Integrating xgrammar could be a good choice: https://github.com/mlc-ai/xgrammar .

dongxiaolong avatar Nov 29 '24 02:11 dongxiaolong

[ ] Better kernels (FA3, FlashInfer, FlexAttention, Triton)

Which kernel is vLLM using right now? Asking in consideration of #10780.

jannikstdl avatar Nov 29 '24 15:11 jannikstdl

Hello,

I noticed that you have already merged the PR regarding this bug (function_name: Union[str, None] = current_tool_call.get("name")). Could you please tell me which released image version resolves this issue? Additionally, could you share the timeline for releasing the new image? Currently, I am using vllm/vllm-openai:v0.6.3.post1.

Thank you.

yumc2573 avatar Dec 18 '24 02:12 yumc2573

Fused GEMM/all-reduce leveraging Flux and AsyncTP

Looking forward to this optimization and hoping to use it as soon as possible. Has it been implemented yet?

double-vin avatar Mar 27 '25 10:03 double-vin

Good news

double-vin avatar Mar 31 '25 03:03 double-vin

This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!

github-actions[bot] avatar Jun 30 '25 02:06 github-actions[bot]

This issue has been automatically closed due to inactivity. Please feel free to reopen if you feel it is still relevant. Thank you!

github-actions[bot] avatar Jul 30 '25 02:07 github-actions[bot]