
[Core] Adding Priority Scheduling

apatke opened this pull request 1 year ago


There are three major changes implemented:


  1. Addition of a new priority scheduling policy to the scheduler config. It also adds a user-defined priority variable to the sequence.

  2. All requests in the running queue and the waiting queue are sorted first by this priority; ties fall back to the FCFS policy.

  3. Forced preemption of requests from the running queue back into the waiting queue. If a request in the running queue has lower priority than a request in the waiting queue, it is preempted back into the waiting queue so that the higher-priority request can execute immediately.
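
To illustrate the ordering and preemption rule, here is a minimal sketch (a simplified model, not the actual scheduler code; the class and helper names are made up for the example):

    # Illustrative sketch only: not the actual vLLM scheduler code;
    # the Request fields and helper names below are made up.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Request:
        request_id: str
        priority: int        # user-defined; a lower value means higher priority
        arrival_time: float  # tie-breaker (FCFS)

    def priority_order(requests: List[Request]) -> List[Request]:
        # Sort by user priority first; fall back to arrival time (FCFS) on ties.
        return sorted(requests, key=lambda r: (r.priority, r.arrival_time))

    def should_preempt(running: List[Request], waiting: List[Request]) -> bool:
        # Preempt a running request if some waiting request has strictly
        # higher priority (i.e. a lower priority value).
        if not running or not waiting:
            return False
        lowest_running = max(running, key=lambda r: r.priority)
        highest_waiting = min(waiting, key=lambda r: r.priority)
        return highest_waiting.priority < lowest_running.priority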

@njhill @saurabhjha1 @youkaichao @simon-mo

FIX #6077



PR Checklist

Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain code quality and improves the efficiency of the review process.

PR Title and Classification

Only specific types of PRs will be reviewed. The PR title should be prefixed appropriately to indicate the type of change. Please use one of the following:

  • [Bugfix] for bug fixes.
  • [CI/Build] for build or continuous integration improvements.
  • [Doc] for documentation fixes and improvements.
  • [Model] for adding a new model or improving an existing model. Model name should appear in the title.
  • [Frontend] for changes to the vLLM frontend (e.g., OpenAI API server, LLM class, etc.).
  • [Kernel] for changes affecting CUDA kernels or other compute kernels.
  • [Core] for changes in the core vLLM logic (e.g., LLMEngine, AsyncLLMEngine, Scheduler, etc.).
  • [Hardware][Vendor] for hardware-specific changes. Vendor name should appear in the prefix (e.g., [Hardware][AMD]).
  • [Misc] for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.

Code Quality

The PR needs to meet the following code quality standards:

  • We adhere to the Google Python style guide and Google C++ style guide.
  • Pass all linter checks. Please use format.sh to format your code.
  • The code needs to be well-documented to ensure future contributors can easily understand it.
  • Include sufficient tests to ensure the project stays correct and robust. This includes both unit tests and integration tests.
  • Please add documentation to docs/source/ if the PR modifies the user-facing behavior of vLLM. It helps vLLM users understand and utilize the new features or changes.

Notes for Large Changes

Please keep the changes as concise as possible. For major architectural changes (>500 LOC, excluding kernel/data/config/test code), we expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, the PR will be tagged with rfc-required and might not be reviewed.

What to Expect for the Reviews

The goal of the vLLM team is to be a transparent reviewing machine. We would like to make the review process transparent and efficient and make sure no contributor feels confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process:

  • After the PR is submitted, the PR will be assigned to a reviewer. Every reviewer will pick up the PRs based on their expertise and availability.
  • After the PR is assigned, the reviewer will provide a status update every 2-3 days. If the PR is not reviewed within 7 days, please feel free to ping the reviewer or the vLLM team.
  • After the review, the reviewer will put an action-required label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR.
  • Please respond to all comments within a reasonable time frame. If a comment isn't clear or you disagree with a suggestion, feel free to ask for clarification or discuss the suggestion.

Thank You

Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone!

apatke avatar Jun 28 '24 12:06 apatke

@apatke you need to run ./format.sh on the code to fix the linter errors.

njhill avatar Jun 28 '24 16:06 njhill

Something we had been discussing is whether it would make sense for the API to take some kind of scheduling_params dataclass containing the priority, to allow for fields related to future scheduling policy additions without having to add them all as separate top-level parameters.
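
For example, a purely hypothetical sketch of what such a dataclass could look like (SchedulingParams and its fields are not an existing vLLM API):

    # Hypothetical sketch only: SchedulingParams is not an existing vLLM API.
    from dataclasses import dataclass

    @dataclass
    class SchedulingParams:
        """Per-request scheduling hints, passed alongside SamplingParams."""
        priority: int = 0  # lower value = scheduled earlier
        # Future policy-specific fields (e.g. a deadline) could live here
        # instead of becoming new top-level request parameters.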

njhill avatar Jun 28 '24 17:06 njhill

Performance slowdown from _schedule_priority_preemption is <4% with the priority policy for Llama 8B. There is no performance degradation when the policy is not enabled.

priority: Throughput: 14.56 requests/s, 6052.59 tokens/s
fcfs: Throughput: 15.15 requests/s, 6299.22 tokens/s
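
For reference, a quick check of the relative slowdown implied by these figures:

    # Quick check of the relative slowdown implied by the figures above.
    fcfs_tput = 15.15      # requests/s, fcfs policy
    priority_tput = 14.56  # requests/s, priority policy
    slowdown = (fcfs_tput - priority_tput) / fcfs_tput
    print(f"{slowdown:.1%}")  # ~3.9%, i.e. under 4%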

apatke avatar Sep 06 '24 18:09 apatke

@youkaichao Would you be able to take a look at the PR?

apatke avatar Sep 09 '24 15:09 apatke

will take a look when I have time :)

youkaichao avatar Sep 20 '24 22:09 youkaichao

Hi, why doesn't this priority scheduling support AsyncLLMEngine?

ZJUFangzh avatar Sep 26 '24 09:09 ZJUFangzh

@youkaichao @njhill Do you know if somebody is already working on supporting this in AsyncLLMEngine? If not, I could go ahead and open a PR.

schoennenbeck avatar Sep 26 '24 11:09 schoennenbeck

Port can be found here: https://github.com/vllm-project/vllm/pull/8850

schoennenbeck avatar Sep 26 '24 12:09 schoennenbeck

Thanks for your effort! How can I use it with the OpenAI client? The vLLM version I'm using is vllm-0.6.3.dev152+gde895f16.d20241010. Currently, I'm putting priority into the 'extra_body' of client.chat.completions.create:

    {'model': 'hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4', 'stream': False, 'max_tokens': 20480, 'temperature': 0, 'n': 1, 'seed': 42, 'extra_body': {'top_k': 1, 'priority': 10}}

Is this right?

tonyaw avatar Oct 10 '24 06:10 tonyaw

@tonyaw Yes, that should be correct. The support was added in another PR. Remember that a lower value for priority means earlier handling (this is in line with how Python's queue.PriorityQueue works).
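
For reference, a minimal end-to-end sketch with the OpenAI Python client (the base URL and model name are placeholders; adjust them to your deployment):

    # Assumes a vLLM server started with --scheduling-policy priority;
    # the base_url and model name are placeholders for your deployment.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="EMPTY")

    response = client.chat.completions.create(
        model="hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4",
        messages=[{"role": "user", "content": "Hello"}],
        temperature=0,
        # vLLM-specific fields that the OpenAI SDK does not expose as named
        # arguments go into extra_body; lower priority values are served earlier.
        extra_body={"priority": 10},
    )
    print(response.choices[0].message.content)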

schoennenbeck avatar Oct 10 '24 06:10 schoennenbeck

@schoennenbeck Thanks for your prompt response! Next question: when can priority-based preemption be triggered?

        if len(prefills.seq_groups) == 0 and self.scheduler_config.policy == "priority":
            self._schedule_priority_preemption(budget)

Does it mean that as long as there is at least one request in the prefill stage, priority preemption will never be triggered?

tonyaw avatar Oct 10 '24 07:10 tonyaw

Also, I noticed that if "--enable-chunked-prefill" is set, priority scheduling won't be triggered. To get better performance I need to enable chunked prefill, but then I can't use priority scheduling any more. May I ask why?

tonyaw avatar Oct 10 '24 08:10 tonyaw

@apatke @schoennenbeck, I ran into a problem and think the current code has an issue (https://github.com/vllm-project/vllm/issues/9272). Could you please help check? Thanks in advance! :-)

tonyaw avatar Oct 11 '24 06:10 tonyaw

@apatke and @schoennenbeck, as I mentioned in https://github.com/vllm-project/vllm/issues/9272, even if the priority is propagated successfully, vLLM always crashes whenever preemption happens. I just tested with vllm-0.6.3.dev173+g36ea7907.d20241011. The only change I made is the following fix plus some logs: https://github.com/vllm-project/vllm/pull/9277

Could you please guide me on how to work around it? Also, I realized --enable_chunked_prefill defaults to True for Llama 3.1 since it is a long-context model. Why can't enable_chunked_prefill work together with priority scheduling? Disabling it reduces vLLM performance a lot.

Reproduction steps:

  1. Start vLLM:
python3 -m vllm.entrypoints.openai.api_server --model hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4 \
        --host 0.0.0.0 --port 8080  --seed 42 --trust-remote-code --scheduling-policy priority \
        --tensor-parallel-size 2 --max-num-seqs 10 --enable_chunked_prefill False
  2. Use an OpenAI client to generate a load of 15 concurrent requests.
  3. Use another OpenAI client to send some requests with priority -100.
  4. As soon as preemption is triggered, vLLM crashes:
INFO 10-11 06:51:31 engine.py:292] Added request chat-29ccd21f37fb4a8ea96c1c8c189a6a49.
INFO 10-11 06:51:31 engine.py:294] tonyaw:Added request -100.
INFO 10-11 06:51:31 scheduler.py:1025] tonyaw: len(prefills.seq_groups) = 0
INFO 10-11 06:51:31 scheduler.py:1025] tonyaw: len(prefills.seq_groups) = 0
INFO 10-11 06:51:31 scheduler.py:807] tonyaw: _schedule_priority_preemption: waiting_queue is not None.
INFO 10-11 06:51:31 scheduler.py:808] tonyaw: seq_group chat-29ccd21f37fb4a8ea96c1c8c189a6a49 priority:-100
INFO 10-11 06:51:31 scheduler.py:837] tonyaw: _schedule_priority_preemption: vseq_group chat-55b8bdce4ae14eb8869839561fac50f9 is pop up, and will preempt.
WARNING 10-11 06:51:31 scheduler.py:1493] Sequence group chat-55b8bdce4ae14eb8869839561fac50f9 is preempted by PreemptionMode.RECOMPUTE mode because there is not enough KV cache space. This can affect the end-to-end performance. Increase gpu_memory_utilization or tensor_parallel_size to provide more KV cache memory. total_num_cumulative_preemption=1
INFO 10-11 06:51:31 model_runner_base.py:120] Writing input of failed execution to /tmp/err_execute_model_input_20241011-065131.pkl...
WARNING 10-11 06:51:31 model_runner_base.py:143] Failed to pickle inputs of failed execution: Can't get local object 'weak_bind.<locals>.weak_bound'
INFO:     10.254.17.246:54142 - "POST /v1/chat/completions HTTP/1.1" 400 Bad Request
INFO:     10.254.17.246:54086 - "POST /v1/chat/completions HTTP/1.1" 400 Bad Request
ERROR 10-11 06:51:31 engine.py:160] ValueError('Error in model execution: seq_group.get_last_latency() should not be called if the seq_group is in prefill phase.')
ERROR 10-11 06:51:31 engine.py:160] Traceback (most recent call last):
ERROR 10-11 06:51:31 engine.py:160]   File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner_base.py", line 116, in _wrapper
ERROR 10-11 06:51:31 engine.py:160]     return func(*args, **kwargs)
ERROR 10-11 06:51:31 engine.py:160]            ^^^^^^^^^^^^^^^^^^^^^
ERROR 10-11 06:51:31 engine.py:160]   File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py", line 1698, in execute_model
ERROR 10-11 06:51:31 engine.py:160]     model_input.async_callback()
ERROR 10-11 06:51:31 engine.py:160]   File "/usr/local/lib/python3.12/dist-packages/vllm/utils.py", line 1122, in weak_bound
ERROR 10-11 06:51:31 engine.py:160]     unbound(inst, *args, **kwargs)
ERROR 10-11 06:51:31 engine.py:160]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 1210, in _process_model_outputs
ERROR 10-11 06:51:31 engine.py:160]     self.do_log_stats(scheduler_outputs, outputs, finished_before,
ERROR 10-11 06:51:31 engine.py:160]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 1543, in do_log_stats
ERROR 10-11 06:51:31 engine.py:160]     stats = self._get_stats(scheduler_outputs, model_output,
ERROR 10-11 06:51:31 engine.py:160]             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 10-11 06:51:31 engine.py:160]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 1664, in _get_stats
ERROR 10-11 06:51:31 engine.py:160]     latency = seq_group.get_last_latency(now)
ERROR 10-11 06:51:31 engine.py:160]               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 10-11 06:51:31 engine.py:160]   File "/usr/local/lib/python3.12/dist-packages/vllm/sequence.py", line 772, in get_last_latency
ERROR 10-11 06:51:31 engine.py:160]     raise ValueError(
ERROR 10-11 06:51:31 engine.py:160] ValueError: seq_group.get_last_latency() should not be called if the seq_group is in prefill phase.
ERROR 10-11 06:51:31 engine.py:160] 
ERROR 10-11 06:51:31 engine.py:160] The above exception was the direct cause of the following exception:
ERROR 10-11 06:51:31 engine.py:160] 
ERROR 10-11 06:51:31 engine.py:160] Traceback (most recent call last):
ERROR 10-11 06:51:31 engine.py:160]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 158, in start
ERROR 10-11 06:51:31 engine.py:160]     self.run_engine_loop()
ERROR 10-11 06:51:31 engine.py:160]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 221, in run_engine_loop
ERROR 10-11 06:51:31 engine.py:160]     request_outputs = self.engine_step()
ERROR 10-11 06:51:31 engine.py:160]                       ^^^^^^^^^^^^^^^^^^
ERROR 10-11 06:51:31 engine.py:160]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 239, in engine_step
ERROR 10-11 06:51:31 engine.py:160]     raise e
ERROR 10-11 06:51:31 engine.py:160]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 230, in engine_step
ERROR 10-11 06:51:31 engine.py:160]     return self.engine.step()
ERROR 10-11 06:51:31 engine.py:160]            ^^^^^^^^^^^^^^^^^^
ERROR 10-11 06:51:31 engine.py:160]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 1386, in step
ERROR 10-11 06:51:31 engine.py:160]     outputs = self.model_executor.execute_model(
ERROR 10-11 06:51:31 engine.py:160]               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 10-11 06:51:31 engine.py:160]   File "/usr/local/lib/python3.12/dist-packages/vllm/executor/distributed_gpu_executor.py", line 82, in execute_model
ERROR 10-11 06:51:31 engine.py:160]     driver_outputs = self._driver_execute_model(execute_model_req)
ERROR 10-11 06:51:31 engine.py:160]                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 10-11 06:51:31 engine.py:160]   File "/usr/local/lib/python3.12/dist-packages/vllm/executor/multiproc_gpu_executor.py", line 155, in _driver_execute_model
ERROR 10-11 06:51:31 engine.py:160]     return self.driver_worker.execute_model(execute_model_req)
ERROR 10-11 06:51:31 engine.py:160]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 10-11 06:51:31 engine.py:160]   File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker_base.py", line 327, in execute_model
ERROR 10-11 06:51:31 engine.py:160]     output = self.model_runner.execute_model(
ERROR 10-11 06:51:31 engine.py:160]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 10-11 06:51:31 engine.py:160]   File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 10-11 06:51:31 engine.py:160]     return func(*args, **kwargs)
ERROR 10-11 06:51:31 engine.py:160]            ^^^^^^^^^^^^^^^^^^^^^
ERROR 10-11 06:51:31 engine.py:160]   File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner_base.py", line 146, in _wrapper
ERROR 10-11 06:51:31 engine.py:160]     raise type(err)(f"Error in model execution: "
ERROR 10-11 06:51:31 engine.py:160] ValueError: Error in model execution: seq_group.get_last_latency() should not be called if the seq_group is in prefill phase.

tonyaw avatar Oct 11 '24 14:10 tonyaw

@tonyaw Hi, I'm curious whether "Why can't enable_chunked_prefill work together with priority scheduling" has been fixed?

justadogistaken avatar Mar 27 '25 02:03 justadogistaken

Is this usable now? I can't find a usage guide anywhere in the docs.

hxt365 avatar Apr 02 '25 08:04 hxt365