[V1][PP] Support PP for MultiprocExecutor
By offloading message-queue (MQ) reading to a dedicated IO thread, MultiprocExecutor can keep multiple batches in flight and thus supports V1 PP seamlessly.
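To illustrate the idea, here is a minimal sketch (not the code in this PR): a dedicated IO thread drains the response message queue and resolves per-batch futures, so the main loop can dispatch the next batch without blocking on the previous one's result, which is what PP needs to keep multiple microbatches in flight. Names like `ResponseReader`, `mq`, `pending`, and `submit` are hypothetical.

```python
import queue
import threading
from concurrent.futures import Future


class ResponseReader:
    """Hypothetical helper: owns the blocking MQ reads on a background IO thread."""

    def __init__(self, mq: queue.Queue):
        self.mq = mq                           # carries (req_id, output) pairs from workers
        self.pending: dict[int, Future] = {}   # req_id -> future awaiting that batch's output
        self.lock = threading.Lock()
        self.thread = threading.Thread(target=self._read_loop, daemon=True)
        self.thread.start()

    def submit(self, req_id: int) -> Future:
        """Register a future for a batch that was just dispatched to the workers."""
        fut: Future = Future()
        with self.lock:
            self.pending[req_id] = fut
        return fut

    def _read_loop(self):
        """Blocking reads happen here, off the main scheduling thread."""
        while True:
            req_id, output = self.mq.get()
            with self.lock:
                fut = self.pending.pop(req_id)
            fut.set_result(output)
```

With the IO thread owning the blocking `mq.get()`, the executor can submit batch N+1 immediately after batch N instead of waiting for N's result to come back.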
Also cc @ruisearch42
Also cc @njhill and @tlrmchlsmth for the MP-related changes.
Thanks to @comaniac, @ruisearch42, and @youkaichao for the comments; I have resolved them all.
Here are some initial benchmark results on a 4xL40 platform:
# TP=2 PP=1
VLLM_USE_V1=1 python3 benchmark_throughput.py --backend=vllm --dataset=./ShareGPT_V3_unfiltered_cleaned_split.json --model=meta-llama/Meta-Llama-3-8B-Instruct --n=1 --num-prompts=1000 --trust-remote-code --disable-log-stats -tp=2 -pp=1 --max-model-len=8192
Throughput: 5.60 requests/s, 2314.67 total tokens/s, 1110.17 output tokens/s
# TP=1 PP=2
VLLM_USE_V1=1 python3 benchmark_throughput.py --backend=vllm --dataset=./ShareGPT_V3_unfiltered_cleaned_split.json --model=meta-llama/Meta-Llama-3-8B-Instruct --n=1 --num-prompts=1000 --trust-remote-code --disable-log-stats -tp=1 -pp=2 --max-model-len=8192
Throughput: 5.74 requests/s, 2374.38 total tokens/s, 1138.81 output tokens/s
# TP=4 PP=1
VLLM_USE_V1=1 python3 benchmark_throughput.py --backend=vllm --dataset=./ShareGPT_V3_unfiltered_cleaned_split.json --model=meta-llama/Meta-Llama-3-8B-Instruct --n=1 --num-prompts=1000 --trust-remote-code --disable-log-stats -tp=4 -pp=1 --max-model-len=8192
Throughput: 7.93 requests/s, 3277.62 total tokens/s, 1572.03 output tokens/s
# TP=1 PP=4
VLLM_USE_V1=1 python3 benchmark_throughput.py --backend=vllm --dataset=./ShareGPT_V3_unfiltered_cleaned_split.json --model=meta-llama/Meta-Llama-3-8B-Instruct --n=1 --num-prompts=1000 --trust-remote-code --disable-log-stats -tp=1 -pp=4 --max-model-len=8192
Throughput: 9.28 requests/s, 3838.89 total tokens/s, 1841.22 output tokens/s
# TP=2 PP=2
VLLM_USE_V1=1 python3 benchmark_throughput.py --backend=vllm --dataset=./ShareGPT_V3_unfiltered_cleaned_split.json --model=meta-llama/Meta-Llama-3-8B-Instruct --n=1 --num-prompts=1000 --trust-remote-code --disable-log-stats -tp=2 -pp=2 --max-model-len=8192
Throughput: 9.69 requests/s, 4005.70 total tokens/s, 1921.23 output tokens/s
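Summary of the results above (only -tp/-pp vary; all other flags are identical):

| TP | PP | Requests/s | Total tokens/s | Output tokens/s |
|----|----|-----------|----------------|-----------------|
| 2  | 1  | 5.60      | 2314.67        | 1110.17         |
| 1  | 2  | 5.74      | 2374.38        | 1138.81         |
| 4  | 1  | 7.93      | 3277.62        | 1572.03         |
| 1  | 4  | 9.28      | 3838.89        | 1841.22         |
| 2  | 2  | 9.69      | 4005.70        | 1921.23         |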
Hi @bigPYJ1151, can you please rebase the PR and resolve merge conflicts?
@WoosukKwon Sure, updated and verified the unit tests as well. Please take a look :)
@ruisearch42 @comaniac @youkaichao Could you please take a final look when you get a chance?
@bigPYJ1151 I've just started the CI test. Will merge once it becomes green.
@WoosukKwon All required checks are green now, please help merge, thanks :)