
[Core] feat: Implement Priority Scheduling in V1 Engine

Open · amitm02 opened this pull request 6 months ago • 1 comment

This commit introduces priority scheduling capabilities to the V1 LLM engine.

Key changes include:

  1. EngineCoreRequest and Request updates:

    • Added a priority field to EngineCoreRequest and Request classes to carry priority information.
  2. Processor update:

    • Modified Processor.process_inputs to accept and pass the priority to EngineCoreRequest.
  3. V1 Scheduler modifications:

    • The scheduler now respects the --scheduling-policy argument.
    • When policy="priority", self.waiting is managed as a min-heap, prioritizing requests by their assigned priority value (lower value means higher priority) and then by arrival time (FCFS).
    • Preemption logic now correctly identifies and preempts the actual lowest-priority running request when space is needed for higher-priority or new requests.
    • FCFS behavior is maintained when policy="fcfs".
  4. Documentation:

    • Updated docs/usage/v1_guide.md and docs/serving/openai_compatible_server.md to reflect V1 engine's support for priority scheduling.
  5. Unit Tests:

    • Added a new test suite in tests/v1/core/test_scheduler.py.

This allows you to influence the order of request processing in the V1 engine by assigning priorities, which is particularly useful in scenarios with varying request importance.
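The ordering described in the key changes can be sketched with Python's `heapq`: waiting requests are popped in ascending (priority, arrival) order, and the preemption victim is the running request with the largest priority value, breaking ties by latest arrival. This is an illustrative sketch with hypothetical names (`PriorityScheduler`, `add_request`, etc.), not the actual vLLM V1 scheduler code:

```python
import heapq
import itertools


class PriorityScheduler:
    """Illustrative sketch of priority scheduling semantics.

    Waiting requests are kept in a min-heap keyed by
    (priority, arrival_seq): a lower priority value wins, and ties
    fall back to FCFS via a monotonically increasing sequence number.
    """

    def __init__(self):
        self._seq = itertools.count()  # arrival order for FCFS tie-breaking
        self._waiting = []             # min-heap of (priority, seq, request_id)
        self.running = []              # list of (priority, seq, request_id)

    def add_request(self, request_id, priority=0):
        heapq.heappush(self._waiting, (priority, next(self._seq), request_id))

    def pop_next(self):
        # Smallest (priority, seq) tuple: highest priority, earliest arrival.
        return heapq.heappop(self._waiting)[2]

    def pick_preemption_victim(self):
        # Preempt the lowest-priority running request: the largest
        # (priority, seq) tuple, i.e. the latest arrival among the
        # requests with the largest priority value.
        return max(self.running)[2]
```

Under this ordering, a request submitted with priority 0 is scheduled ahead of an earlier request with priority 2, while two priority-0 requests keep their FCFS order; with `policy="fcfs"` the priority key would simply be ignored.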

FIX #14002

amitm02 avatar May 26 '25 07:05 amitm02

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

github-actions[bot] avatar May 26 '25 07:05 github-actions[bot]

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @amitm02.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

mergify[bot] avatar May 30 '25 15:05 mergify[bot]

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @amitm02.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

mergify[bot] avatar Jun 01 '25 14:06 mergify[bot]

The commit history is a mess; can you clean it up? Maybe open another PR?

youkaichao avatar Jun 02 '25 08:06 youkaichao

> The commit history is a mess; can you clean it up? Maybe open another PR?

Re-submitted as https://github.com/vllm-project/vllm/pull/19057

amitm02 avatar Jun 03 '25 07:06 amitm02