
[Perf] Improve/Fix-regression for FA3 in High QPS regimes

Open LucasWilkinson opened this issue 5 months ago • 3 comments

Essential Elements of an Effective PR Description Checklist

  • [x] The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • [x] The test plan, such as providing test command.
  • [x] The test results, such as pasting the results comparison before and after, or e2e results
  • [ ] (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.

Purpose

This PR addresses excessive time spent in the combine phase of FA3, based on the findings in https://github.com/vllm-project/vllm/issues/18619. The associated vllm-flash-attn PR is https://github.com/vllm-project/flash-attention/pull/70; see that PR for more details (it must land first).
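
For background, the combine phase is the reduction that merges the partial outputs produced by split-KV attention back into a single output per query head. The sketch below shows the math of that reduction; it is an illustration only, not the FA3 kernel this PR changes.

```python
# Minimal NumPy sketch (illustration only, not the FA3 kernel touched by this
# PR) of the split-KV "combine" reduction: each split produces a partial
# attention output plus a per-head log-sum-exp (LSE); the combine step
# re-weights each partial output by its share of the global softmax normalizer.
import numpy as np

def combine_splits(partial_out: np.ndarray, partial_lse: np.ndarray) -> np.ndarray:
    """partial_out: [num_splits, num_heads, head_dim]
    partial_lse:    [num_splits, num_heads]
    returns:        [num_heads, head_dim]
    """
    # Global log-sum-exp over all splits, per head.
    global_lse = np.logaddexp.reduce(partial_lse, axis=0)   # [num_heads]
    # Each split's weight in the final softmax.
    weights = np.exp(partial_lse - global_lse)               # [num_splits, num_heads]
    # Weighted sum of the partial outputs.
    return np.einsum("sh,shd->hd", weights, partial_out)
```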

Perf Results

Setup

vllm serve <model> --disable-log-requests --max-num-seqs 1024  --block-size 16 --max-num-batched-tokens 2048 --no-enable-prefix-caching

Qwen/Qwen2.5-VL-3B-Instruct

vllm bench serve --model Qwen/Qwen2.5-VL-3B-Instruct --request-rate 1,5,50,100,200 --request-for-secs 5 --random-input-len 600 --random-output-len 125 --ignore-eos

Main

========================== Request Rate Sweep Summary ==========================
Request Rate | Req/s | Output Tok/s | Median TTFT (ms) | Median TPOT (ms)
-------------------------------------------------------------------------
1.0          | 2.00  | 249.71       | 32.69            | 4.66            
5.0          | 4.31  | 538.60       | 33.26            | 4.75            
50.0         | 39.92 | 4990.28      | 56.11            | 9.84            
100.0        | 63.08 | 7885.19      | 175.15           | 30.11           
200.0        | 64.94 | 8117.78      | 3329.13          | 41.89           
================================================================================

PR

========================== Request Rate Sweep Summary ==========================
Request Rate | Req/s | Output Tok/s | Median TTFT (ms) | Median TPOT (ms)
-------------------------------------------------------------------------
1.0          | 2.05  | 256.37       | 30.81            | 4.34            
5.0          | 4.31  | 538.65       | 33.29            | 4.74            
50.0         | 40.65 | 5081.02      | 46.31            | 7.87            
100.0        | 71.97 | 8996.07      | 83.14            | 18.68           
200.0        | 83.01 | 10376.26     | 1940.63          | 31.35           
================================================================================
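
The gains are concentrated at the higher request rates: at 200 req/s, output throughput rises from ~8118 to ~10376 tok/s (~28%), median TTFT drops from ~3329 ms to ~1941 ms, and median TPOT drops from 41.9 ms to 31.4 ms. At 1 and 5 req/s the results are essentially unchanged.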

meta-llama/Meta-Llama-3.1-8b

vllm bench serve --model meta-llama/Meta-Llama-3.1-8b --request-rate 1,5,50,100,200 --request-for-secs 5 --random-input-len 600 --random-output-len 125 --ignore-eos

Main

========================== Request Rate Sweep Summary ==========================
Request Rate | Req/s | Output Tok/s | Median TTFT (ms) | Median TPOT (ms)
-------------------------------------------------------------------------
1.0          | 0.79  | 98.28        | 31.39            | 8.28            
5.0          | 4.05  | 506.15       | 36.65            | 8.26            
50.0         | 34.03 | 4253.57      | 139.83           | 28.83           
100.0        | 40.71 | 5088.44      | 1704.81          | 53.46           
200.0        | 43.02 | 5377.35      | 7010.04          | 61.87           
================================================================================

PR

========================== Request Rate Sweep Summary ==========================
Request Rate | Req/s | Output Tok/s | Median TTFT (ms) | Median TPOT (ms)
-------------------------------------------------------------------------
1.0          | 0.79  | 98.97        | 38.57            | 7.84            
5.0          | 4.13  | 516.30       | 34.77            | 7.84            
50.0         | 36.32 | 4540.17      | 97.19            | 23.07           
100.0        | 43.69 | 5460.74      | 1572.63          | 49.53           
200.0        | 44.29 | 5536.30      | 6730.15          | 60.71           
================================================================================
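
The improvement is smaller here but consistent under high load: output throughput rises ~7% at 100 req/s (5088 → 5461 tok/s) and ~3% at 200 req/s, while at 50 req/s median TTFT drops from ~140 ms to ~97 ms and median TPOT from 28.8 ms to 23.1 ms.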

Test Plan

  • Tested benchmarks in https://github.com/vllm-project/flash-attention/pull/70
  • lm-eval for Qwen/Qwen2.5-VL-3B-Instruct
  • CI

Test Result

lm_eval

This PR

(vllm) lwilkinson@beaker:~/code/vllm$ lm_eval --model vllm --model_args pretrained=Qwen/Qwen2.5-VL-3B-Instruct --tasks gsm8k --batch_size auto
...
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.6831|±  |0.0128|
|     |       |strict-match    |     5|exact_match|↑  |0.5383|±  |0.0137|

Main

(vllm) lwilkinson@beaker:~/code/vllm$ lm_eval --model vllm --model_args pretrained=Qwen/Qwen2.5-VL-3B-Instruct --tasks gsm8k --batch_size auto
...
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.6831|±  |0.0128|
|     |       |strict-match    |     5|exact_match|↑  |0.5383|±  |0.0137|
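
The gsm8k scores are identical on main and this PR, indicating no accuracy regression from the kernel change.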

(Optional) Documentation Update

LucasWilkinson · Jun 11 '25 02:06

[!WARNING] You have reached your daily quota limit. Please wait up to 24 hours and I will start processing your requests again!

gemini-code-assist[bot] · Jun 11 '25 02:06

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to be added to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

github-actions[bot] · Jun 11 '25 02:06

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @LucasWilkinson.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

mergify[bot] · Jun 17 '25 03:06