[Speculative decoding][Re-take] Enable TP>1 speculative decoding
Fix tests in #4808:
- Bypass broadcasting when torch.distributed is not initialized (TP=1).
- Use 2-GPU runner to run the integration test with TP=2.
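As a sketch of the first point, the broadcast can be guarded on whether torch.distributed has been initialized; the helper name and metadata shape below are hypothetical illustrations, not the PR's actual implementation:

```python
import torch.distributed as dist

def maybe_broadcast_metadata(metadata: dict, src: int = 0) -> dict:
    """Hypothetical helper: broadcast execution metadata to all
    tensor-parallel ranks, but skip the collective entirely when
    torch.distributed was never initialized (the TP=1 case)."""
    if not dist.is_initialized() or dist.get_world_size() == 1:
        return metadata
    container = [metadata]
    # broadcast_object_list pickles the object on `src` and fills it in
    # on every other rank in place.
    dist.broadcast_object_list(container, src=src)
    return container[0]
```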
PR Checklist
Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain code quality and improves the efficiency of the review process.
PR Title and Classification
Only specific types of PRs will be reviewed. The PR title should be prefixed appropriately to indicate the type of change. Please use one of the following:
- [Bugfix] for bug fixes.
- [CI/Build] for build or continuous integration improvements.
- [Doc] for documentation fixes and improvements.
- [Model] for adding a new model or improving an existing model. The model name should appear in the title.
- [Frontend] for changes to the vLLM frontend (e.g., OpenAI API server, LLM class, etc.).
- [Kernel] for changes affecting CUDA kernels or other compute kernels.
- [Core] for changes to the core vLLM logic (e.g., LLMEngine, AsyncLLMEngine, Scheduler, etc.).
- [Hardware][Vendor] for hardware-specific changes. The vendor name should appear in the prefix (e.g., [Hardware][AMD]).
- [Misc] for PRs that do not fit the above categories. Please use this sparingly.
Note: If the PR spans more than one category, please include all relevant prefixes.
Code Quality
The PR needs to meet the following code quality standards:
- We adhere to the Google Python style guide and Google C++ style guide.
- Pass all linter checks. Please use format.sh to format your code.
- The code needs to be well-documented to ensure future contributors can easily understand it.
- Include sufficient tests to ensure the project stays correct and robust. This includes both unit tests and integration tests.
- Please add documentation to docs/source/ if the PR modifies the user-facing behavior of vLLM. This helps vLLM users understand and use the new features or changes.
Notes for Large Changes
Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag the PR with rfc-required and might not review it.
What to Expect for the Reviews
The goal of the vLLM team is to be a transparent reviewing machine. We would like to make the review process transparent and efficient, and to make sure no contributor feels confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process:
- After the PR is submitted, it will be assigned to a reviewer. Every reviewer picks up PRs based on their expertise and availability.
- After the PR is assigned, the reviewer will provide a status update every 2-3 days. If the PR is not reviewed within 7 days, please feel free to ping the reviewer or the vLLM team.
- After the review, the reviewer will put an action-required label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR.
- Please respond to all comments within a reasonable time frame. If a comment isn't clear or you disagree with a suggestion, feel free to ask for clarification or discuss the suggestion.
Thank You
Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone!
cc @LiuXiaoxuanPKU @youkaichao @cadedaniel
Hi @Alexei-V-Ivanov-AMD, the failed test on AMD has the following error. Could you help point us in the right direction for a fix? Thanks!
def test_target_model_tp_gt_1(baseline_llm_generator, test_llm_generator,
                              batch_size: int, output_len: int):
    """Verify greedy equality when tensor parallelism is used.
    """
>   run_greedy_equality_correctness_test(baseline_llm_generator,
                                         test_llm_generator,
                                         batch_size,
                                         max_output_len=output_len,
                                         force_output_len=True)

spec_decode/e2e/test_integration_dist.py:57:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
spec_decode/e2e/conftest.py:264: in run_greedy_equality_correctness_test
    spec_batch_tokens, spec_batch_token_ids = get_output_from_llm_generator(
spec_decode/e2e/conftest.py:204: in get_output_from_llm_generator
    for llm in llm_generator():
spec_decode/e2e/conftest.py:181: in generator_outer
    for llm in generator_inner():
spec_decode/e2e/conftest.py:161: in generator_inner
    wait_for_gpu_memory_to_clear(
spec_decode/e2e/conftest.py:291: in wait_for_gpu_memory_to_clear
    nvmlInit()
/opt/conda/envs/py_3.9/lib/python3.9/site-packages/pynvml.py:2132: in nvmlInit
    nvmlInitWithFlags(0)
/opt/conda/envs/py_3.9/lib/python3.9/site-packages/pynvml.py:2122: in nvmlInitWithFlags
    _nvmlCheckReturn(ret)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

ret = 999

    def _nvmlCheckReturn(ret):
        if (ret != NVML_SUCCESS):
>           raise NVMLError(ret)
E           pynvml.NVMLError_Unknown: Unknown Error

/opt/conda/envs/py_3.9/lib/python3.9/site-packages/pynvml.py:919: NVMLError_Unknown
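For context on where the error originates: the conftest helper wait_for_gpu_memory_to_clear calls nvmlInit() before constructing the LLM, and NVML is an NVIDIA-only management library, so the call can fail on ROCm/AMD nodes. A minimal, hypothetical probe (not part of the test suite) capturing this failure mode:

```python
from pynvml import NVMLError, nvmlInit, nvmlShutdown

def nvml_available() -> bool:
    """Hypothetical probe: return True only if NVML can be initialized.
    On AMD/ROCm nodes nvmlInit() raises NVMLError, which surfaces above
    as ret=999 ("Unknown Error")."""
    try:
        nvmlInit()
    except NVMLError:
        return False
    nvmlShutdown()
    return True
```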
Hello @comaniac ,
I assume you're talking about the following build: https://buildkite.com/vllm/ci/builds/7430#018f7e0c-6001-40de-aeb6-29ce70a898b1
If you look earlier in the log, the node complains about being unable to access the "gated repo" (https://buildkite.com/vllm/ci/builds/7430#018f7e0c-6001-40de-aeb6-29ce70a898b1/11010-11264), usually a llama-family model.
The CI's config is correct, but sometimes the HF authentication still doesn't succeed.
We have seen this kind of error on multiple occasions in different tests as well, not only on the AMD side.
The best current remedy is to repeat the test, which I've requested just now for this particular build.
Ah I see. Thanks for helping out! Actually I've re-run this test twice, but let's see if we are lucky this time.
Update for @comaniac
Looking into your first attempt at running the same test in your build, I can see that the error you're referring to is the only one in the test (https://buildkite.com/vllm/ci/builds/7430#018f7d52-18a7-44e6-a9e6-44f37bbe86e2/13304-14196).
We shall look into this with greater attention.
The test you're trying to run was working just a few hours ago, so there is not much code for us to look through. Let me discuss this matter internally and come back to you on it.
Appreciate it. This is a new test added in this PR. It mainly enables TP>1 for the target model in speculative decoding by broadcasting the required metadata. You're welcome to review this PR and let me know if there's anything wrong. Thanks.
Update #2 @comaniac
FYI the latest successful PR passing that test is https://buildkite.com/vllm/ci/builds/7365
They were doing the same test "AMD: Distributed Tests" on the same machine as your original run.
But they didn't have the test that was failing in your case, which is
Running 2 items in this shard: tests/spec_decode/e2e/test_integration_dist.py::test_target_model_tp_gt_1[1-32-2-test_llm_kwargs0-baseline_llm_kwargs0-per_test_common_llm_kwargs0-common_llm_kwargs0], tests/spec_decode/e2e/test_integration_dist.py::test_target_model_tp_gt_1[1-32-2-test_llm_kwargs1-baseline_llm_kwargs0-per_test_common_llm_kwargs0-common_llm_kwargs0]
IMHO, the problem with your attempt at the "AMD: Distributed Tests" is here:
https://github.com/comaniac/vllm/blob/9ddeab3410e7958998a3afa3e675126439fea008/.buildkite/test-pipeline.yaml#L45C3-L45C58
Compared with the standard composition of the "AMD: Distributed Tests", there is an added line
"- pytest -v -s spec_decode/e2e/test_integration_dist.py"
which does not belong there.
> You're welcome to review this PR and let me know if there's anything wrong.
OK, I'll have a look at it. Thank you!
Update #3 @comaniac
To succeed with the "AMD: Distributed Tests", please remove the line
https://github.com/comaniac/vllm/blob/9ddeab3410e7958998a3afa3e675126439fea008/.buildkite/test-pipeline.yaml#L45C3-L45C58
as it doesn't belong there.
Update #4 @comaniac
If, however, you're interested in learning why this NVML error pops up in your speculative decoding test, we can have a look at that as well.
We'll just need more time to investigate.
Thanks for the advice, but this line was added intentionally to test speculative decoding on multiple GPUs, and it passes on NVIDIA GPUs, so we should not remove it. In the short term, I can skip this test on AMD GPUs and add a warning.
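A minimal sketch of what that skip could look like (the is_hip() check via torch.version.hip is an assumption for illustration; the test signature is the one from the traceback above):

```python
import pytest
import torch

def is_hip() -> bool:
    # True when PyTorch was built against ROCm (AMD GPUs).
    return torch.version.hip is not None

# Hypothetical guard: the e2e conftest relies on NVML, which only exists
# on NVIDIA driver stacks, so skip this distributed test on AMD for now.
@pytest.mark.skipif(is_hip(),
                    reason="NVML-based GPU memory check unavailable on AMD")
def test_target_model_tp_gt_1(baseline_llm_generator, test_llm_generator,
                              batch_size: int, output_len: int):
    ...
```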
Great! Thanks for the PR! I'm very interested in vLLM and this speculative decoding feature. Actually, I didn't know this was already in progress, and I was testing the same feature after implementing it myself 😢 I should communicate more from now on!
And I have a question. @comaniac @cadedaniel Does this PR support different degrees of parallelism for the draft and target workers (e.g., tp=1 for the draft, tp=2 for the target)? In my branch, I observed slower speeds when I increased the draft workers' TP to match the target's.
After a short experiment with this PR's code, I'm getting almost the same results as my local branch, which is based on v0.4.2. I guess it's due to the communication overhead of using a large TP for a small draft model.
@comaniac @cadedaniel Do you plan to implement this feature (different TP for draft/target) in the near future?
Please let me know if you do; otherwise, maybe I can make a PR. I'd appreciate a response about your plan. Thanks in advance!
Setting: benchmark_latency.py (batch_size=8, lookahead_slots=8), A100*4
| target | draft | tp | latency |
|---|---|---|---|
| opt-30b | opt-125m | 1 | 4.62s |
| opt-30b | opt-125m | 2 | 6.10s |
| opt-30b | opt-125m | 4 | 6.23s |
| opt-30b | None | 1 | 5.63s |
| opt-30b | None | 2 | 3.60s |
| opt-30b | None | 4 | 2.42s |
| opt-125m | None | 1 | 0.35s |
| opt-125m | None | 2 | 1.30s |
| opt-125m | None | 4 | 1.39s |
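For reference, a rough sketch of how the "opt-30b target, opt-125m draft, tp=2" row could be constructed through the offline LLM API. The argument names (speculative_model, num_speculative_tokens, use_v2_block_manager) reflect the vLLM engine arguments of this era and should be treated as assumptions; lookahead_slots=8 is mapped to 8 speculative tokens here:

```python
from vllm import LLM, SamplingParams

# Roughly the "opt-30b target, opt-125m draft, tp=2" configuration above.
llm = LLM(
    model="facebook/opt-30b",
    speculative_model="facebook/opt-125m",
    num_speculative_tokens=8,
    tensor_parallel_size=2,
    use_v2_block_manager=True,  # assumed requirement for spec decode here
)
outputs = llm.generate(["San Francisco is a"],
                       SamplingParams(temperature=0.0, max_tokens=128))
print(outputs[0].outputs[0].text)
```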
Thanks for the help and the benchmark. We do have a plan to support a draft model with a different TP size; see #4632.