[V1][PP] Support PP for MultiprocExecutor

Open bigPYJ1151 opened this issue 9 months ago • 4 comments

By offloading message-queue (MQ) reading to a dedicated I/O thread, MultiprocExecutor can keep multiple batches in flight and thus support V1 pipeline parallelism (PP) seamlessly.
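The idea in the description above can be sketched as follows. This is a minimal, hypothetical illustration (the class and attribute names are not vLLM's actual API): a daemon thread continuously drains the workers' response MQ into a local queue, so the main executor loop can submit new batches without blocking on the results of earlier ones, which is what pipeline parallelism needs to keep multiple microbatches in flight.

```python
import queue
import threading

class ExecutorIOSketch:
    """Illustrative sketch, not vLLM's MultiprocExecutor implementation."""

    def __init__(self, response_mq):
        self.response_mq = response_mq  # shared MQ written to by workers
        self.inflight = queue.Queue()   # finished batches, in arrival order
        self.io_thread = threading.Thread(target=self._drain, daemon=True)
        self.io_thread.start()

    def _drain(self):
        # Runs off the main thread, so submitting a new batch never blocks
        # on reading the results of earlier in-flight batches.
        while True:
            item = self.response_mq.get()
            if item is None:  # shutdown sentinel
                break
            self.inflight.put(item)

    def collect_output(self):
        # The main thread picks up the next finished batch when ready.
        return self.inflight.get()
```

With a blocking read on the main thread, at most one batch could be outstanding; the background reader removes that constraint.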

bigPYJ1151 avatar Mar 04 '25 16:03 bigPYJ1151

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs will not trigger a full CI run by default. Instead, they only run the fastcheck CI, which covers a small and essential subset of tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

github-actions[bot] avatar Mar 04 '25 16:03 github-actions[bot]

Also cc @ruisearch42

comaniac avatar Mar 04 '25 17:03 comaniac

also cc @njhill and @tlrmchlsmth for the mp-related change.

youkaichao avatar Mar 05 '25 07:03 youkaichao

Thanks to @comaniac, @ruisearch42, and @youkaichao for the comments; I have resolved them all.

I have some initial benchmark results. On a 4xL40 platform:

# TP=2 PP=1
VLLM_USE_V1=1 python3 benchmark_throughput.py --backend=vllm --dataset=./ShareGPT_V3_unfiltered_cleaned_split.json --model=meta-llama/Meta-Llama-3-8B-Instruct --n=1 --num-prompts=1000 --trust-remote-code --disable-log-stats -tp=2 -pp=1 --max-model-len=8192 
Throughput: 5.60 requests/s, 2314.67 total tokens/s, 1110.17 output tokens/s

# TP=1 PP=2
VLLM_USE_V1=1 python3 benchmark_throughput.py --backend=vllm --dataset=./ShareGPT_V3_unfiltered_cleaned_split.json --model=meta-llama/Meta-Llama-3-8B-Instruct --n=1 --num-prompts=1000 --trust-remote-code --disable-log-stats -tp=1 -pp=2 --max-model-len=8192 
Throughput: 5.74 requests/s, 2374.38 total tokens/s, 1138.81 output tokens/s

# TP=4 PP=1
VLLM_USE_V1=1 python3 benchmark_throughput.py --backend=vllm --dataset=./ShareGPT_V3_unfiltered_cleaned_split.json --model=meta-llama/Meta-Llama-3-8B-Instruct --n=1 --num-prompts=1000 --trust-remote-code --disable-log-stats -tp=4 -pp=1 --max-model-len=8192 
Throughput: 7.93 requests/s, 3277.62 total tokens/s, 1572.03 output tokens/s

# TP=1 PP=4
VLLM_USE_V1=1 python3 benchmark_throughput.py --backend=vllm --dataset=./ShareGPT_V3_unfiltered_cleaned_split.json --model=meta-llama/Meta-Llama-3-8B-Instruct --n=1 --num-prompts=1000 --trust-remote-code --disable-log-stats -tp=1 -pp=4 --max-model-len=8192 
Throughput: 9.28 requests/s, 3838.89 total tokens/s, 1841.22 output tokens/s

# TP=2 PP=2
VLLM_USE_V1=1 python3 benchmark_throughput.py --backend=vllm --dataset=./ShareGPT_V3_unfiltered_cleaned_split.json --model=meta-llama/Meta-Llama-3-8B-Instruct --n=1 --num-prompts=1000 --trust-remote-code --disable-log-stats -tp=2 -pp=2 --max-model-len=8192 
Throughput: 9.69 requests/s, 4005.70 total tokens/s, 1921.23 output tokens/s
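To make the results above easier to compare, the snippet below (added for illustration, using only the throughput numbers reported in this comment) computes each configuration's speedup relative to the TP=2 PP=1 baseline:

```python
# Requests/s from the benchmark runs above (4xL40, Llama-3-8B-Instruct)
results = {
    "TP=2 PP=1": 5.60,
    "TP=1 PP=2": 5.74,
    "TP=4 PP=1": 7.93,
    "TP=1 PP=4": 9.28,
    "TP=2 PP=2": 9.69,
}

baseline = results["TP=2 PP=1"]
for config, rps in results.items():
    # Speedup relative to the 2-GPU tensor-parallel baseline
    print(f"{config}: {rps:.2f} req/s ({rps / baseline:.2f}x)")
```

Notably, on this platform pure PP (TP=1 PP=4) outperforms pure TP (TP=4 PP=1) at the same GPU count.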

bigPYJ1151 avatar Mar 05 '25 13:03 bigPYJ1151

Hi @bigPYJ1151, can you please rebase the PR and resolve merge conflicts?

WoosukKwon avatar Apr 28 '25 16:04 WoosukKwon

Hi @bigPYJ1151, can you please rebase the PR and resolve merge conflicts?

@WoosukKwon Sure, updated, also verified the unit tests. Please take a look :)

bigPYJ1151 avatar Apr 29 '25 14:04 bigPYJ1151

@ruisearch42 @comaniac @youkaichao Can you please take a final look by any chance?

WoosukKwon avatar Apr 29 '25 19:04 WoosukKwon

@bigPYJ1151 I've just started the CI test. Will merge once it becomes green.

WoosukKwon avatar May 06 '25 08:05 WoosukKwon

@WoosukKwon All required checks are green now, please help merge, thanks :)

bigPYJ1151 avatar May 06 '25 11:05 bigPYJ1151