[Bug fix] ROCm FlashAttention: add missing `full_scales` argument to Triton wrapper
The recent change in PR #15734 added a `full_scales` tensor to the call site in `rocm_flash_attn.py`, but `_attention.forward` in `attention/ops/triton_flash_attention.py` still accepts only 12 positional arguments. The mismatch raises:
```
TypeError: _attention.forward() takes from 9 to 12 positional arguments but 13 were given
```
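A minimal sketch of the fix, assuming an illustrative signature (only `full_scales` comes from the PR; the other parameter names are stand-ins, chosen so the argument counts match the traceback). Adding `full_scales` as an optional trailing argument lets the updated call site pass a 13th positional argument while existing 12-argument callers keep working:

```python
import torch


class _attention(torch.autograd.Function):
    # Illustrative signature: 9 required plus 3 optional parameters,
    # matching "takes from 9 to 12 positional arguments" above.
    @staticmethod
    def forward(
        ctx,
        q,
        k,
        v,
        o,
        input_metadata,
        causal,
        sm_scale,
        layout,
        bias=None,
        alibi_slopes=None,
        fp8_scales=None,
        full_scales=None,  # new optional trailing argument (PR #15734)
    ):
        # A None default keeps existing callers working, while the updated
        # call site in rocm_flash_attn.py can pass full_scales as a 13th
        # positional argument without raising a TypeError.
        if full_scales is not None:
            pass  # thread the extra scales through to the Triton kernel
        ...
```

Passing the new argument by keyword at the call site would be equally valid and slightly more robust to further signature changes.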
cc: @houseroad @luccafong @rasmith
This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @zhewenl.
https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
Rebase to trigger the tests?
Fixed by https://github.com/vllm-project/vllm/commit/8e4b351a0c9e414b0c56c32cbdef51a21d1ea1be
Yes, #15734 was supposed to be merged after #12591, but it got merged first, so attention was briefly broken.