
[Roadmap] vLLM Roadmap Q3 2024

Open simon-mo opened this issue 1 year ago • 38 comments

Anything you want to discuss about vllm.

This document includes the features in vLLM's roadmap for Q3 2024. Please feel free to discuss and contribute, as this roadmap is shaped by the vLLM community.

Themes.

As before, we categorized our roadmap into 6 broad themes:

  • Broad model support: vLLM should support a wide range of transformer-based models and be kept up to date as much as possible. This includes new auto-regressive decoder models, encoder-decoder models, hybrid architectures, and models supporting multi-modal inputs.
  • Excellent hardware coverage: vLLM should run on a wide range of accelerators for production AI workloads. This includes GPUs, tensor accelerators, and CPUs. We will work closely with hardware vendors to ensure vLLM gets the best performance out of each chip.
  • Performance optimization: vLLM should be kept up to date with the latest performance optimization techniques. Users of vLLM can trust its performance to be competitive and strong.
  • Production level engine: vLLM should be the go-to choice for production level serving engine with a suite of features bridging the gaps from single forward pass to 24/7 service.
  • Strong OSS product: vLLM is and will be a true community project. We want it to be a healthy project with a regular release cadence, good documentation, and a growing set of reviewers for the codebase.
  • Extensible architectures: For vLLM to grow at an even faster pace, it needs good abstractions to support a wide range of scheduling policies, hardware backends, and inference optimizations. We will work on refactoring the codebase to support that.

Broad Model Support

  • [x] Support Large Models (Arctic, Nemotron4, Llama3 400B+ when released)
    • [x] Via Pipeline Parallelism #4412
    • [x] Via FP8
  • [x] New Attention Mechanism (Jamba, Phi3-Small, etc)
  • [x] Encoder Decoder (#4837, #4888, #4942)
  • [x] Multi-Modal #4194

Help wanted:

  • [ ] Whisper and the audio API
  • [ ] Arbitrary HF model
  • [x] Chameleon (#5770)
  • [ ] Multi token prediction
  • [ ] Reward model API
  • [ ] Embedding Model Expansion (Bert, XLMRoberta) (#5447)

Hardware Support

  • [ ] A feature matrix for all the hardware that vLLM supports, and their maturity levels
  • [ ] Enhanced performance benchmarks across hardware platforms
  • [ ] Expanding feature support across hardware platforms
    • [ ] PagedAttention and Chunked Prefill on Inferentia
    • [ ] Chunked Prefill on Intel CPU/GPU
    • [ ] PagedAttention on Intel Gaudi
    • [x] TP and INT8 on TPU
    • [ ] Bug fixes and GEMM tuning on AMD GPUs

Performance Optimizations

  • [ ] Spec Decode Optimization (tracker)
  • [ ] APC Optimizations
  • [ ] Guided Decode Optimizations
  • [ ] API server performance
  • [ ] Quantization
    • [x] FP8/INT8 quantization improvements
    • [ ] Quantized MoEs
    • [x] AWQ Performance
    • [ ] Fused GEMM/all-reduce
  • [x] Scheduler overhead removal
  • [x] Optimize prepare input, sampling, process output

Production Features

  • [x] Chunked Prefill on by default
  • [ ] APC on by default
  • [ ] N-gram prompt lookup spec decode on by default
  • [ ] Tool use
  • [ ] Request prioritization framework

Help wanted

  • [ ] Support multiple models in the same server
  • [ ] [Feedback wanted] Disaggregated prefill: please discuss your use case with us and the scenarios where it would be preferred over chunked prefill.

OSS Community

  • [x] Reproducible performance benchmark on realistic workload
  • [x] CI enhancements
  • [x] Release process: minimize breaking changes and include deprecations

Help wanted

  • [ ] Documentation enhancements in general (styling, UI, explainers, tutorials, examples, etc)

Extensible Architecture

  • [ ] KV cache transfer #5557
  • [x] Distributed execution #5775
  • [ ] Improvements to scheduler and memory manager supporting new attention mechanisms
  • [ ] Performance enhancement for multi-modal processing

If an item you want is not on the roadmap, your suggestions and contributions are still welcome! Please feel free to comment in this thread, open a feature request, or create an RFC.

simon-mo avatar Jun 25 '24 00:06 simon-mo

Support multiple models in the same server

Does vLLM need multi-model support similar to what FastChat does, or something else?

Jeffwan avatar Jun 25 '24 01:06 Jeffwan

Hello, how about https://github.com/vllm-project/vllm/pull/2809?

CSEEduanyu avatar Jun 25 '24 02:06 CSEEduanyu

Hi, the issues mentioned in https://github.com/vllm-project/vllm/pull/5036 should be taken into account.

jeejeelee avatar Jun 26 '24 15:06 jeejeelee

Will vLLM rely more on Triton to optimize operator performance in the future, or will it consider making more use of the torch.compile mechanism?

And are there any plans for this?

MeJerry215 avatar Jun 27 '24 06:06 MeJerry215

Hi! Is there or will there be support for the OpenAI Batch API?

ashim-mahara avatar Jun 27 '24 19:06 ashim-mahara

I am working on Whisper; my fork is at https://github.com/mesolitica/vllm-whisper. The frontend should later be compatible with the OpenAI API and able to stream output tokens. There are a few hiccups I am still trying to figure out, based on the T5 branch, https://github.com/vllm-project/vllm/blob/9f20ccf56b63b0b47e09069615e023287f1681f8/vllm/model_executor/layers/enc_dec_attention.py#L83

  1. Still trying to figure out the KV cache for the encoder hidden states, or else each step will recompute them.
  2. There is no non-causal attention for the encoder or for the cross-attention in the decoder; all attention implementations in vLLM seem to be causal-only.
  3. Reuse the cross-attention KV cache from the first step for subsequent steps (a rough sketch of this idea follows below).
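
For point 3, here is a minimal PyTorch sketch of the idea (an illustration only, not vLLM's actual enc-dec attention code; all class and variable names are made up): compute the cross-attention K/V from the encoder output once, cache it, and reuse it at every subsequent decode step.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossAttentionWithCache(nn.Module):
    """Cross-attention that projects the encoder output to K/V only once."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.o_proj = nn.Linear(d_model, d_model)

    def _split(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        return x.view(b, t, self.n_heads, self.head_dim).transpose(1, 2)

    def forward(self, decoder_hidden, encoder_hidden=None, kv_cache=None):
        # First decode step: project the encoder output to K/V and cache it.
        # Later steps: skip the projection entirely and reuse the cached tensors.
        if kv_cache is None:
            k = self._split(self.k_proj(encoder_hidden))
            v = self._split(self.v_proj(encoder_hidden))
            kv_cache = (k, v)
        k, v = kv_cache
        q = self._split(self.q_proj(decoder_hidden))
        # Cross-attention over the full encoder sequence is non-causal (no mask).
        out = F.scaled_dot_product_attention(q, k, v, is_causal=False)
        b, _, t, _ = out.shape
        out = out.transpose(1, 2).reshape(b, t, -1)
        return self.o_proj(out), kv_cache


# Tiny usage example with random tensors:
enc_out = torch.randn(1, 80, 512)  # encoder output (e.g. audio frames)
attn = CrossAttentionWithCache(d_model=512, n_heads=8)
out, cache = attn(torch.randn(1, 1, 512), encoder_hidden=enc_out)  # first step
out, cache = attn(torch.randn(1, 1, 512), kv_cache=cache)          # reuse cache
```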

huseinzol05 avatar Jun 28 '24 11:06 huseinzol05

Able to load and infer, https://github.com/mesolitica/vllm-whisper/blob/main/examples/whisper_example.py, but the output is still garbage; there might be bugs related to the weights or the attention. Still debugging.

huseinzol05 avatar Jun 28 '24 14:06 huseinzol05

Do you have plans to support Ascend 910B in the future?

jkl375 avatar Jul 01 '24 10:07 jkl375

Please consider prioritizing dynamic / just-in-time 8-bit quantization like EETQ, which doesn't require an offline quantization step. For example, a current advantage of TGI is that you can load an original 16-bit HF model as int8 just by passing the --quantize eetq arg. AFAIK its custom kernels handle outliers in higher precision at runtime, so it loses very little precision.

Previous mention in issues: https://github.com/vllm-project/vllm/issues/3261#issuecomment-1986438115 PR for it was opened but eventually closed: https://github.com/vllm-project/vllm/pull/3614

hibukipanim avatar Jul 03 '24 08:07 hibukipanim

deepseek-v2 and deepseek-coder-v2 are supported now, but AWQ and GPTQ versions are not, so these models are still hard to use given their huge 236B size.

Also, MLA (Multi-head Latent Attention) from these models is not supported yet.

tutu329 avatar Jul 09 '24 00:07 tutu329

Support for DoLa would be great!

amritap-ef avatar Jul 11 '24 08:07 amritap-ef

Please consider prioritizing dynamic / just-in-time 8-bit quantization like EETQ, which doesn't require an offline quantization step. For example, a current advantage of TGI is that you can load an original 16-bit HF model as int8 just by passing the --quantize eetq arg. AFAIK its custom kernels handle outliers in higher precision at runtime, so it loses very little precision.

Previous mention in issues: #3261 (comment) PR for it was opened but eventually closed: #3614

  • Have you tried fp8 marlin? Run with --quantization fp8 and we will quantize the weights to fp8 in place. This will be faster and more accurate than eetq [note: requires ampere +]
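
For anyone who wants to try this, a minimal sketch of the in-place fp8 path described above (the model name is illustrative, and this assumes an Ampere-or-newer GPU and a vLLM build with fp8 support):

```python
from vllm import LLM, SamplingParams

# Load a 16-bit checkpoint and let vLLM quantize the weights to fp8 in place.
llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct", quantization="fp8")

outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```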

robertgshaw2-redhat avatar Jul 12 '24 13:07 robertgshaw2-redhat

Please consider supporting transformer-based value models such as in the vllm fork https://github.com/MARIO-Math-Reasoning/vllm and the huggingface implementation https://huggingface.co/docs/trl/models#trl.AutoModelForCausalLMWithValueHead. The only thing that changes is adding a head to the end of the model to predict a value instead of logits. This would be a powerful addition to support very fast generation search and open up the possibility of more effective methods such as MCTS compared to traditional prompt based approaches such as self-consistency, CoT, ToT, etc.
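
For context, a minimal sketch of what the TRL value-head wrapper mentioned above returns, using TRL's public API (the base model is illustrative; this is not a vLLM integration):

```python
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead

model_name = "gpt2"  # illustrative base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLMWithValueHead.from_pretrained(model_name)

inputs = tokenizer("2 + 2 = 4 because", return_tensors="pt")
with torch.no_grad():
    # The wrapper returns the usual LM logits plus one scalar value per token
    # from the extra value head.
    lm_logits, _, values = model(**inputs)
print(lm_logits.shape, values.shape)  # (1, seq_len, vocab_size), (1, seq_len)
```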

kaifronsdal avatar Jul 13 '24 23:07 kaifronsdal

Please consider supporting transformer-based value models such as in the vllm fork https://github.com/MARIO-Math-Reasoning/vllm and the huggingface implementation https://huggingface.co/docs/trl/models#trl.AutoModelForCausalLMWithValueHead. The only thing that changes is adding a head to the end of the model to predict a value instead of logits. This would be a powerful addition to support very fast generation search and open up the possibility of more effective methods such as MCTS compared to traditional prompt based approaches such as self-consistency, CoT, ToT, etc.

Thank you for your nice contribution! I wonder whether it would be possible for you to fork a branch from vLLM instead of creating a new repository, so that anyone can see what has changed in the new contribution?

haichuan1221 avatar Jul 14 '24 01:07 haichuan1221

  • Have you tried fp8 marlin? Run with --quantization fp8 and we will quantize the weights to fp8 in place. This will be faster and more accurate than eetq [note: requires ampere +]

yes, thanks @robertgshaw2-neuralmagic, I have been trying it in recent days and it does look promising. Happy to hear you believe it's more accurate than EETQ. I can confirm that Llama-70B-Instruct got almost the same MMLU score with fp8 (80.56 vs 80.7).

It would be great if it could load and quantize the layers iteratively; right now, if the 16-bit model can't fit on the GPU, we have to quantize it offline first. But the fact that there is an option to do "dynamic" quantization without calibration data is great. Thanks for this.

hibukipanim avatar Jul 14 '24 13:07 hibukipanim

  • Have you tried fp8 marlin? Run with --quantization fp8 and we will quantize the weights to fp8 in place. This will be faster and more accurate than eetq [note: requires ampere +]

yes, thanks @robertgshaw2-neuralmagic, I have been trying it in recent days and it does look promising. Happy to hear you believe it's more accurate than EETQ. I can confirm that Llama-70B-Instruct got almost the same MMLU score with fp8 (80.56 vs 80.7).

It would be great if it could load and quantize the layers iteratively; right now, if the 16-bit model can't fit on the GPU, we have to quantize it offline first. But the fact that there is an option to do "dynamic" quantization without calibration data is great. Thanks for this.

It should be more accurate and much, much faster, so I don't think we will prioritize adding EETQ ourselves (though we will of course accept a contribution).

Iterative quantization is on my list, ideally this week.

robertgshaw2-redhat avatar Jul 14 '24 13:07 robertgshaw2-redhat

Hi! Is there or will there be support for the OpenAI Batch API?

vLLM currently has partial support for this (#4794).
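
For anyone looking for that partial support, here is a rough sketch of how it can be driven: build an OpenAI-style batch input file and hand it to the offline runner added in #4794 (the model name and file paths are illustrative, and the module path and flags should be checked against your vLLM version):

```python
import json
import subprocess

# Each line follows the OpenAI Batch input format: a custom_id plus the request body.
requests = [
    {
        "custom_id": f"request-{i}",
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": "meta-llama/Meta-Llama-3-8B-Instruct",
            "messages": [{"role": "user", "content": prompt}],
        },
    }
    for i, prompt in enumerate(["Hello!", "What is vLLM?"])
]
with open("batch_input.jsonl", "w") as f:
    f.write("\n".join(json.dumps(r) for r in requests) + "\n")

# The offline batch runner spawns its own engine rather than calling a running
# server, which is the limitation discussed in the comments below.
subprocess.run(
    [
        "python", "-m", "vllm.entrypoints.openai.run_batch",
        "-i", "batch_input.jsonl",
        "-o", "batch_output.jsonl",
        "--model", "meta-llama/Meta-Llama-3-8B-Instruct",
    ],
    check=True,
)
```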

DarkLight1337 avatar Jul 17 '24 07:07 DarkLight1337

Hi! Is there or will there be support for the OpenAI Batch API?

vLLM currently has partial support for this (#4794).

This requires a completely new instance of vLLM. It would be nice if we could just call an existing API with a batch request, like you do with the OpenAI Batch API.

w013nad avatar Jul 17 '24 13:07 w013nad

Hi! Is there or will there be support for the OpenAI Batch API?

vLLM currently has partial support for this (#4794).

This requires a completely new instance of vLLM. It would be nice if we could just call an existing API with a batch request, like you do with the OpenAI Batch API.

Exactly my thoughts. I could help with the build. I already have a nano-library that interfaces with OpenAI directly at ashim-mahara/odbg.

The primary problem I have identified is tracking request origins in the case of dynamic batching by vLLM. That is easier if batches are executed sequentially, but results would still need to be saved to disk somewhere for later retrieval.
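
On the request-origin point, the OpenAI batch format already carries a per-line custom_id that is echoed back in each output line, so results can be re-associated after any internal reordering. A minimal sketch, assuming an OpenAI-style results file (the exact output schema may differ slightly between implementations):

```python
import json

# Map each result line back to the request that produced it via custom_id,
# regardless of the order in which the engine actually processed the requests.
results_by_id = {}
with open("batch_output.jsonl") as f:
    for line in f:
        record = json.loads(line)
        results_by_id[record["custom_id"]] = record["response"]["body"]

answer = results_by_id["request-0"]["choices"][0]["message"]["content"]
print(answer)
```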

ashim-mahara avatar Jul 17 '24 13:07 ashim-mahara

an existing API with a batch request like you do with the OpenAI Batch API.

@w013nad (or others), please feel free to open an RFC for this to discuss the ideal API. The main challenge is around file storage I believe.

simon-mo avatar Jul 17 '24 18:07 simon-mo

Hopefully the function_call and tool_choice features will be implemented soon and will additionally support models like Qwen2.

warlockedward avatar Jul 23 '24 06:07 warlockedward

Hi all,

CPU optimizations to support GGUF models!

My thought is that adding CPU optimizations makes vLLM more robust.

  • I know that IPEX has already been added to the project.
  • Projects like llama.cpp have been the go-to inference servers for running models at lower precision on CPU, and llama.cpp even provides an HTTP server to host a GGUF model, but it doesn't handle parallel requests the way vLLM does.
  • I've benchmarked the llama.cpp server with a quantized Llama 3 8B model (int4 precision), and the results are very promising.
  • Adding support for running quantized (GGUF) models on CPU with the vLLM server would be a worthwhile item for this roadmap.

If anyone is already looking into this, please let me know; I want to work on this part and I'm open to helping/contributing.

Thanks

akhilreddy0703 avatar Jul 30 '24 18:07 akhilreddy0703

Hopefully the function_call and tool_choice features will be implemented soon and will additionally support models like Qwen2.

Ollama already supports tool use as of version 0.3.0, see: https://ollama.com/blog/tool-support

dongfangduoshou123 avatar Jul 31 '24 09:07 dongfangduoshou123

Any chance that you guys can implement Dry Repetition Penalty? I sorely miss it from backends like Oobabooga or Kobold.

fodevac33 avatar Aug 02 '24 16:08 fodevac33

We want to see more improvement on the compiler side, since this is the major gap between vLLM and TRT-LLM (with its Myelin compiler).

By the way, what's your opinion on SGLang (they extensively use torch.compile to optimize the ML workload) and their released benchmark? @simon-mo

Hi all,

CPU optimizations to support GGUF models!

My thought is that adding CPU optimizations makes vLLM more robust.

  • I know that IPEX has already been added to the project.
  • Projects like llama.cpp have been the go-to inference servers for running models at lower precision on CPU, and llama.cpp even provides an HTTP server to host a GGUF model, but it doesn't handle parallel requests the way vLLM does.
  • I've benchmarked the llama.cpp server with a quantized Llama 3 8B model (int4 precision), and the results are very promising.
  • Adding support for running quantized (GGUF) models on CPU with the vLLM server would be a worthwhile item for this roadmap.

If anyone is already looking into this, please let me know; I want to work on this part and I'm open to helping/contributing.

Thanks

@akhilreddy0703 #5191 has just been merged, providing support for GGUF models.
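
For reference, a minimal usage sketch now that #5191 is merged (the GGUF file path and tokenizer repo are illustrative; GGUF support may still be experimental, so check the docs for your version):

```python
from vllm import LLM, SamplingParams

# Point vLLM at a local GGUF file; the tokenizer is taken from the original HF repo,
# since GGUF files don't always ship a usable tokenizer config.
llm = LLM(
    model="./Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",   # illustrative local path
    tokenizer="meta-llama/Meta-Llama-3-8B-Instruct",  # illustrative HF repo
)
out = llm.generate(["The capital of France is"], SamplingParams(max_tokens=16))
print(out[0].outputs[0].text)
```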

DarkLight1337 avatar Aug 06 '24 10:08 DarkLight1337

Hi, I would like to contribute to the Reward model API, do you have any suggestions or ideas in mind for this feature?

gabrielmbmb avatar Aug 08 '24 15:08 gabrielmbmb

Hi, I would like to contribute to the Reward model API, do you have any suggestions or ideas in mind for this feature?

A good starting point might be an API similar to this: https://github.com/OpenRLHF/OpenRLHF/pull/391/files
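
Not the proposed API itself, but for context, a minimal sketch of what scoring a prompt/response pair with an off-the-shelf sequence-classification-style reward model looks like in plain transformers (the model name is just one public example); a vLLM reward API would presumably return a similar per-request scalar:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "OpenAssistant/reward-model-deberta-v3-large-v2"  # illustrative reward model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

prompt = "Explain why the sky is blue."
response = "Sunlight scatters off air molecules, and blue light scatters the most."
inputs = tokenizer(prompt, response, return_tensors="pt")
with torch.no_grad():
    # One scalar score per (prompt, response) pair.
    score = model(**inputs).logits[0].item()
print(score)
```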

tsaoyu avatar Aug 09 '24 16:08 tsaoyu

Support multiple models in the same server

Does vLLM need the multi-model support similar like what FastChat does or something else?

I'm up for this; supporting multiple models, or different versions of a model, has good use cases in the era of synthetic data. But I would suggest exposing this feature at the engine level. My current recipe is using LangChain to abstract a layer on top of Ray, where Ray is in charge of distributed model loading and inference.

tsaoyu avatar Aug 09 '24 16:08 tsaoyu

Is there a way to pass a custom decoding config per prompt in offline inference mode, i.e. use outlines to generate custom JSON output per prompt? It seems that currently it is only possible to pass a single decoding config that is used for all prompts, so it would be great to have this feature!
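
For what it's worth, the offline API already accepts one SamplingParams per prompt in recent versions (check yours), which is roughly the shape a per-prompt guided-decoding option could take. A minimal sketch of the existing per-prompt mechanism (the model name is illustrative, and the guided/outlines part is exactly what is still missing):

```python
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")  # illustrative model

prompts = [
    "Return a JSON object describing a user.",
    "Write a haiku about the sea.",
]
# One SamplingParams per prompt is already accepted; a per-prompt guided-decoding
# config (e.g. an outlines JSON schema) could plausibly plug in at this same level.
per_prompt_params = [
    SamplingParams(temperature=0.0, max_tokens=128),
    SamplingParams(temperature=0.8, max_tokens=64),
]
for output in llm.generate(prompts, per_prompt_params):
    print(output.outputs[0].text)
```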

amritap-ef avatar Aug 13 '24 21:08 amritap-ef