# [Benchmark][New Dataset] Added benchmark support for Unsloth Vision Datasets
Essential Elements of an Effective PR Description Checklist
- [x] The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
- [x] The test plan, such as providing test command.
- [x] The test results, such as pasting the results comparison before and after, or e2e results.
- [x] (Optional) The necessary documentation update, such as updating `supported_models.md` and `examples` for a new model.
## Purpose
This PR adds support for two Hugging Face datasets from Unsloth for vision benchmarking tasks: `unsloth/LaTeX_OCR` and `unsloth/Radiology_mini`.
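For context, the raw datasets can be inspected with the `datasets` library. The sketch below is illustrative only (it is not part of this PR), and the column names are an assumption to verify locally:

```python
# Illustrative only: peek at the image/text fields a vision benchmark consumes.
from datasets import load_dataset

for name in ("unsloth/LaTeX_OCR", "unsloth/Radiology_mini"):
    ds = load_dataset(name, split="train")
    # Column names are an assumption -- print them to confirm.
    print(name, ds.column_names)
    print(type(ds[0].get("image")))  # expected: a PIL image for vision datasets
```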
## Test Plan
- Benchmark serving

```bash
# Serve the model
vllm serve unsloth/Qwen2-VL-2B-Instruct \
  --dtype bfloat16 \
  --max-model-len 4096 \
  --max-num-seqs 5 \
  --limit-mm-per-prompt "image=1,video=0" \
  --max-seq-len-to-capture 4096 \
  --mm-processor-kwargs '{"min_pixels": 784, "max_pixels": 1003520}'
```
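Before launching the benchmark, a quick sanity check (not part of the original test plan) can confirm the server answers chat requests; this sketch assumes vLLM's default port 8000:

```python
# Optional sanity check: one image-free request against the served model.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",  # default vLLM port (assumption)
    json={
        "model": "unsloth/Qwen2-VL-2B-Instruct",
        "messages": [{"role": "user", "content": "Say hello."}],
        "max_tokens": 16,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```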
```bash
# Benchmark
python3 vllm/benchmarks/benchmark_serving.py \
  --backend openai-chat \
  --request-rate 5 \
  --max-concurrency 5 \
  --model unsloth/Qwen2-VL-2B-Instruct \
  --endpoint /v1/chat/completions \
  --dataset-name hf \
  --dataset-path unsloth/LaTeX_OCR \
  --hf-split train \
  --hf-output-len 256 \
  --num-prompts 1000
```
- Benchmark throughput

```bash
python3 vllm/benchmarks/benchmark_throughput.py \
  --model unsloth/Qwen2-VL-2B-Instruct \
  --backend vllm-chat \
  --dataset-name hf \
  --dataset-path unsloth/LaTeX_OCR \
  --hf-split train \
  --num-prompts 1000
```
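Either command can target the second dataset added in this PR by swapping `--dataset-path unsloth/LaTeX_OCR` for `--dataset-path unsloth/Radiology_mini`.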
## Test Result
- Benchmark serving

```text
============ Serving Benchmark Result ============
Successful requests:                     1000
Benchmark duration (s):                  201.35
Total input tokens:                      8000
Total generated tokens:                  73002
Request throughput (req/s):              4.97
Output token throughput (tok/s):         362.57
Total Token throughput (tok/s):          402.30
---------------Time to First Token----------------
Mean TTFT (ms):                          44.91
Median TTFT (ms):                        42.01
P99 TTFT (ms):                           72.49
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          5.50
Median TPOT (ms):                        5.40
P99 TPOT (ms):                           7.38
---------------Inter-token Latency----------------
Mean ITL (ms):                           5.43
Median ITL (ms):                         4.69
P99 ITL (ms):                            32.08
==================================================
```
- Benchmark throughput

```text
Throughput: 40.87 requests/s, 12228.12 total tokens/s, 10461.48 output tokens/s
Total num prompt tokens:  43231
Total num output tokens:  256000
```
## (Optional) Documentation Update
Updated the benchmark README.