Integrate the new ragged paged attention kernel with vLLM v1 on TPU
This PR integrates the new ragged paged attention kernel with vLLM v1 on TPU. In particular, this PR:
- Updates the torch_xla pin to the latest version.
- Updates pallas.py in v1 to use the new ragged paged attention kernel instead of the 3 separate kernels used in v0 (see the sketch after this list).
- Combines the prompt and decode steps into a single step in tpu_model_runner.py, similar to what the GPU backend does today.
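For context, here is a minimal pure-PyTorch sketch of what a ragged paged attention call computes: one attention pass over prompt and decode tokens packed into a single ragged batch, with the KV cache stored in fixed-size pages. All names, shapes, and the reference implementation itself are illustrative assumptions, not the actual torch_xla kernel API, and causal masking for prompt tokens is omitted for brevity.

import torch

def ragged_paged_attention_ref(q, k_pages, v_pages, kv_lens, page_indices, cu_q_lens):
    # q:                 [total_q_tokens, num_heads, head_dim]; prompt and decode
    #                    tokens from all requests packed into one ragged batch
    # k_pages, v_pages:  [num_pages, page_size, num_heads, head_dim]
    # kv_lens:           [num_seqs]; valid KV length per request
    # page_indices:      [num_seqs, max_pages]; per-request page table
    # cu_q_lens:         [num_seqs + 1]; cumulative query lengths (request boundaries)
    page_size = k_pages.shape[1]
    outputs = []
    for i in range(kv_lens.shape[0]):
        q_i = q[cu_q_lens[i]:cu_q_lens[i + 1]]            # this request's queries
        n_pages = (kv_lens[i] + page_size - 1) // page_size
        pages = page_indices[i, :n_pages]                 # this request's pages
        k_i = k_pages[pages].flatten(0, 1)[:kv_lens[i]]   # gather paged KV
        v_i = v_pages[pages].flatten(0, 1)[:kv_lens[i]]
        scores = torch.einsum("qhd,khd->hqk", q_i, k_i) / q_i.shape[-1] ** 0.5
        out_i = torch.einsum("hqk,khd->qhd", scores.softmax(dim=-1), v_i)
        outputs.append(out_i)
    return torch.cat(outputs)                             # [total_q_tokens, num_heads, head_dim]

Because one call handles both multi-token prompt requests and single-token decode requests, the model runner no longer needs separate prefill and decode passes.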
👋 Hi! Thank you for contributing to the vLLM project.
💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.
Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.
🚀
This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @vanbasten23.
https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
@alexm-redhat @WoosukKwon, there are currently two issues. First, running vllm/tests/entrypoints/llm/test_accuracy.py::test_lm_eval_accuracy_v1_engine is very slow (~2h, probably due to excessive recompilation), which I'm investigating. Second, that same test actually fails. Any suggestions on how to find a smaller repro so I can debug it?
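One way to get a smaller repro might be a short standalone script that drives the engine with greedy sampling on a handful of prompts and compares outputs against a known-good commit, instead of the full lm_eval run. Everything below is an assumption for illustration (the model name, prompts, max_model_len, and the VLLM_USE_V1 toggle), not taken from the failing test:

import os
os.environ["VLLM_USE_V1"] = "1"  # assumption: env toggle for the v1 engine

from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2-1.5B-Instruct", max_model_len=1024)  # placeholder model
params = SamplingParams(temperature=0.0, max_tokens=32)  # greedy, deterministic

prompts = ["The capital of France is", "1 + 1 ="]
for out in llm.generate(prompts, params):
    # Compare against outputs captured on a known-good commit to bisect
    # whether the new kernel changes results.
    print(repr(out.outputs[0].text))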
cc @mgoin
cc @bvrockwell
Hey @mgoin, this is my first PR in the vLLM repo. I see that "pre-commit / pre-commit (pull_request)" in the CI is red; it seems to be complaining about formatting and mypy. For formatting, is there a linter I can run locally in vLLM?
Hey @vanbasten23, please install pre-commit using these directions: https://docs.vllm.ai/en/latest/contributing/overview.html#testing
pip install -r requirements-dev.txt
pre-commit install --hook-type pre-commit --hook-type commit-msg
Then it will run automatically on your next commit.
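You can also run the hooks on demand over the whole tree without making a commit (a standard pre-commit command, not vLLM-specific):
pre-commit run --all-files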
Thanks, I followed it. Somehow, running pre-commit run --all-files removed the import at https://github.com/vanbasten23/vllm/blob/58d1b2aa772deb166355423997fbf5c1b6b186a1/vllm/v1/attention/backends/pallas.py#L7, which is important even though it is not directly used. I've added it back manually.
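For reference, the usual way to protect a side-effect-only import from lint tooling is a noqa marker; the import below is only an illustration of the pattern, not necessarily the exact line that was removed:

# Hypothetical example: an import needed only for its side effects
# (e.g. registering custom ops), marked so linters keep it.
import torch_xla.experimental.custom_kernel  # noqa: F401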
Thanks @mgoin for reviewing the PR.
Hi @mgoin, could you help merge the PR? I don't see a merge button on my side.
Thanks for the ping. Yes, you need committer status to merge, which I'll handle. Let me quickly chat with @alexm-redhat before merging.