[Kernel] Triton Paged Attn Decode Kernel
This adds a paged attention decode kernel written in the Triton language.
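Since the diff itself isn't shown here, below is a minimal, illustrative sketch of how a Triton paged-attention decode kernel is commonly structured: one program per (sequence, head) walks the sequence's block table, loads each KV-cache page, and folds it into an online-softmax accumulator. Every name, shape, and the `[num_blocks, num_kv_heads, block_size, head_size]` cache layout below is an assumption for illustration; this is not the PR's actual kernel.

```python
import triton
import triton.language as tl

@triton.jit
def paged_attn_decode_kernel(
    out_ptr,           # [num_seqs, num_heads, head_size] (hypothetical layout)
    q_ptr,             # [num_seqs, num_heads, head_size]
    k_cache_ptr,       # [num_blocks, num_kv_heads, block_size, head_size]
    v_cache_ptr,       # same layout as k_cache_ptr
    block_tables_ptr,  # [num_seqs, max_blocks_per_seq]
    seq_lens_ptr,      # [num_seqs]
    scale,             # softmax scale, typically head_size ** -0.5
    num_heads: tl.constexpr,
    num_kv_heads: tl.constexpr,
    max_blocks_per_seq: tl.constexpr,
    BLOCK_SIZE: tl.constexpr,   # KV-cache page size; must be a power of 2
    HEAD_SIZE: tl.constexpr,    # must be a power of 2 (e.g. 128)
):
    seq_idx = tl.program_id(0)
    head_idx = tl.program_id(1)
    # Map the query head to its KV head (grouped-query attention).
    kv_head_idx = head_idx // (num_heads // num_kv_heads)

    seq_len = tl.load(seq_lens_ptr + seq_idx)
    num_pages = tl.cdiv(seq_len, BLOCK_SIZE)

    # Decode processes a single query token per sequence.
    d = tl.arange(0, HEAD_SIZE)
    q_off = (seq_idx * num_heads + head_idx) * HEAD_SIZE
    q = tl.load(q_ptr + q_off + d).to(tl.float32)

    # Online-softmax running state: max, denominator, weighted-V accumulator.
    m = tl.zeros([1], dtype=tl.float32) - float("inf")
    l = tl.zeros([1], dtype=tl.float32)
    acc = tl.zeros([HEAD_SIZE], dtype=tl.float32)

    for page in range(num_pages):
        # Translate the logical page index to a physical cache block.
        phys = tl.load(block_tables_ptr + seq_idx * max_blocks_per_seq + page)
        offs = tl.arange(0, BLOCK_SIZE)
        mask = page * BLOCK_SIZE + offs < seq_len  # partial last page

        base = (phys * num_kv_heads + kv_head_idx) * BLOCK_SIZE * HEAD_SIZE
        k = tl.load(k_cache_ptr + base + offs[:, None] * HEAD_SIZE + d[None, :],
                    mask=mask[:, None], other=0.0).to(tl.float32)
        scores = tl.sum(k * q[None, :], axis=1) * scale
        scores = tl.where(mask, scores, float("-inf"))

        # Rescale previous state whenever a new running max appears.
        m_new = tl.maximum(m, tl.max(scores, axis=0))
        alpha = tl.exp(m - m_new)
        p = tl.exp(scores - m_new)
        l = l * alpha + tl.sum(p, axis=0)
        acc = acc * alpha

        v = tl.load(v_cache_ptr + base + offs[:, None] * HEAD_SIZE + d[None, :],
                    mask=mask[:, None], other=0.0).to(tl.float32)
        acc += tl.sum(p[:, None] * v, axis=0)
        m = m_new

    out = acc / l
    tl.store(out_ptr + q_off + d, out.to(out_ptr.dtype.element_ty))
```

The launch grid for a sketch like this would be `(num_seqs, num_heads)`. Production decode kernels usually go further, e.g. splitting long sequences across programs and reducing partial results in a second pass, but the block-table indirection and online softmax above are the core of the paged-attention decode pattern.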
👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, covering a small and essential subset of CI tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to be added to our Buildkite org.
Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
To run CI, PR reviewers can do one of these:
- Add the `ready` label to the PR.
- Enable auto-merge.
🚀
@WoosukKwon @tlrmchlsmth Please review and provide feedback. Thanks!
Hi @rahulbatra85, thanks for the PR!
Could you please provide performance benchmarks? In particular, a benchmark in the Llama setting (head_size=128, num_kv_heads=8, etc.) would be useful.
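For anyone reproducing such numbers locally, a minimal CUDA-event timing harness in the requested Llama-like setting could look like the sketch below. It assumes the hypothetical `paged_attn_decode_kernel` sketched above; the shapes and the one-physical-block-per-slot block table are illustrative only, and vLLM's own kernel benchmark scripts would be the authoritative way to measure this.

```python
import torch
# Assumes the paged_attn_decode_kernel sketch above is defined in scope.

def benchmark_decode(num_seqs=64, num_heads=32, num_kv_heads=8,
                     head_size=128, block_size=16, seq_len=1024,
                     num_iters=100):
    torch.manual_seed(0)
    max_blocks = (seq_len + block_size - 1) // block_size
    num_blocks = num_seqs * max_blocks  # enough physical blocks, no sharing

    q = torch.randn(num_seqs, num_heads, head_size,
                    device="cuda", dtype=torch.float16)
    k_cache = torch.randn(num_blocks, num_kv_heads, block_size, head_size,
                          device="cuda", dtype=torch.float16)
    v_cache = torch.randn_like(k_cache)
    block_tables = torch.arange(num_blocks, device="cuda",
                                dtype=torch.int32).view(num_seqs, max_blocks)
    seq_lens = torch.full((num_seqs,), seq_len, device="cuda", dtype=torch.int32)
    out = torch.empty_like(q)
    scale = head_size ** -0.5

    grid = (num_seqs, num_heads)
    def run():
        paged_attn_decode_kernel[grid](
            out, q, k_cache, v_cache, block_tables, seq_lens, scale,
            num_heads=num_heads, num_kv_heads=num_kv_heads,
            max_blocks_per_seq=max_blocks,
            BLOCK_SIZE=block_size, HEAD_SIZE=head_size,
        )

    for _ in range(10):  # warmup, triggers JIT compilation
        run()
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(num_iters):
        run()
    end.record()
    torch.cuda.synchronize()
    # elapsed_time() returns milliseconds; report microseconds per iteration.
    print(f"decode kernel: {start.elapsed_time(end) / num_iters * 1000:.1f} us/iter")
```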
This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @rahulbatra85.
https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
This pull request has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this pull request should remain open. Thank you!
This pull request has been automatically closed due to inactivity. Please feel free to reopen if you intend to continue working on it. Thank you!