
Integrate the new ragged paged attention kernel with vLLM v1 on TPU

Open vanbasten23 opened this pull request 9 months ago • 2 comments

This PR integrates the new ragged paged attention kernel with vLLM v1 on TPU. In particular, this PR:

  • Updates the torch_xla pin to the latest version.
  • Updates pallas.py in v1 to use the new ragged paged attention kernel in place of the 3 separate kernels used in v0.
  • Combines the prompt and decode steps into a single step in tpu_model_runner.py, similar to what the GPU path does today; a minimal reference sketch of the resulting ragged layout follows below.
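
The sketch below is a minimal pure-PyTorch reference for that layout: prefill and decode tokens from every sequence are flattened onto one ragged query axis, delimited by cumulative lengths, so a single attention call can serve a mixed prefill/decode batch. Names and shapes here are illustrative assumptions, not the actual Pallas kernel's signature.

import torch

def ragged_attention_reference(q, kv_cache, cu_q_lens, kv_lens):
    # q:         [total_q_tokens, num_heads, head_dim], prefill and decode
    #            tokens from every sequence flattened into one ragged axis
    # kv_cache:  list of (k, v) per sequence, each [kv_len_i, num_heads, head_dim]
    # cu_q_lens: [num_seqs + 1] cumulative query lengths (a decode seq contributes 1)
    # kv_lens:   [num_seqs] total context length of each sequence
    outs = []
    for i, kv_len in enumerate(kv_lens):
        qi = q[cu_q_lens[i]:cu_q_lens[i + 1]]            # [q_len_i, H, D]
        k, v = (t[:kv_len] for t in kv_cache[i])
        scores = torch.einsum("qhd,khd->hqk", qi, k) / qi.shape[-1] ** 0.5
        # causal mask: query j sits at absolute position kv_len - q_len + j
        q_len = qi.shape[0]
        key_pos = torch.arange(kv_len)
        q_pos = kv_len - q_len + torch.arange(q_len)
        scores = scores.masked_fill(key_pos > q_pos[:, None], float("-inf"))
        outs.append(torch.einsum("hqk,khd->qhd", scores.softmax(dim=-1), v))
    return torch.cat(outs)  # [total_q_tokens, num_heads, head_dim]

The real kernel additionally gathers K/V from paged cache blocks rather than contiguous per-sequence tensors; this reference only illustrates the ragged query layout.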

vanbasten23 avatar Feb 17 '25 06:02 vanbasten23

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run other CI tests on top of it by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

github-actions[bot] avatar Feb 17 '25 06:02 github-actions[bot]

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @vanbasten23.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
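
For reference, the usual rebase flow looks like this (assuming the fork's remote is named origin and an upstream remote has not been added yet; <your-branch> is a placeholder):

git remote add upstream https://github.com/vllm-project/vllm.git
git fetch upstream
git rebase upstream/main
git push --force-with-lease origin <your-branch>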

mergify[bot] avatar Feb 18 '25 06:02 mergify[bot]

@alexm-redhat @WoosukKwon, there are currently 2 issues. First, running vllm/tests/entrypoints/llm/test_accuracy.py::test_lm_eval_accuracy_v1_engine is very slow (~2 h, probably due to excessive recompilation), which I'm investigating. Second, the same test actually fails. Any suggestions on how to find a smaller repro to debug with?
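
A smaller repro might start from something like this (a hypothetical sketch: the model, prompt count, and settings are assumptions, not what test_lm_eval_accuracy_v1_engine uses):

import os
os.environ["VLLM_USE_V1"] = "1"  # assumes the v1 engine is selected via this env var

from vllm import LLM, SamplingParams

# Tiny model and batch to keep compilation and runtime short on TPU
llm = LLM(model="Qwen/Qwen2-0.5B-Instruct", max_model_len=512, max_num_seqs=8)
outputs = llm.generate(
    ["The capital of France is"] * 4,
    SamplingParams(temperature=0.0, max_tokens=16),
)
for out in outputs:
    print(out.outputs[0].text)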

vanbasten23 avatar Feb 21 '25 19:02 vanbasten23

cc @mgoin

vanbasten23 avatar Feb 27 '25 19:02 vanbasten23

cc @bvrockwell

vanbasten23 avatar Feb 27 '25 19:02 vanbasten23

Hey @mgoin, this is my first PR in the vLLM repo. I see that "pre-commit / pre-commit (pull_request)" in the CI is red; it seems to be complaining about formatting and mypy. For formatting, is there a linter I can run locally in vLLM?

vanbasten23 avatar Feb 27 '25 20:02 vanbasten23

Hey @vanbasten23, please install pre-commit using these directions: https://docs.vllm.ai/en/latest/contributing/overview.html#testing

pip install -r requirements-dev.txt
pre-commit install --hook-type pre-commit --hook-type commit-msg

Then the hooks will apply on your next commit.
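
The hooks can also be run against the whole tree without making a commit:

pre-commit run --all-files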

mgoin avatar Feb 27 '25 20:02 mgoin

Thanks, I followed it. Somehow, running pre-commit run --all-files removed https://github.com/vanbasten23/vllm/blob/58d1b2aa772deb166355423997fbf5c1b6b186a1/vllm/v1/attention/backends/pallas.py#L7, which is important even though it is not directly used. I've added it back manually.
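
For imports like this that exist only for their side effects (e.g., registering custom ops), the usual way to keep a linter from stripping them is an explicit noqa marker — a sketch, assuming the linked line is such an import:

import torch_xla.experimental.custom_kernel  # noqa: F401  # imported for its side effects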

vanbasten23 avatar Feb 27 '25 21:02 vanbasten23

Thanks @mgoin for reviewing the PR.

vanbasten23 avatar Feb 28 '25 05:02 vanbasten23

Hi @mgoin, could you help merge the PR? I don't see a merge button on my side.

vanbasten23 avatar Feb 28 '25 15:02 vanbasten23

Thanks for the ping. Yes, you need committer status to merge, which I'll handle. Let me quickly chat with @alexm-redhat before merging.

mgoin avatar Feb 28 '25 15:02 mgoin