[Hardware][TPU] Multi-LoRA implementation for the TPU backend
This PR adds a Multi-LoRA implementation that works on the TPU backend, extending the work done in #11100.
Currently this uses PyTorch operations for the Punica kernels, but I am going to put up a PR with Pallas kernels soon.
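For context, here is a minimal sketch (not the PR's actual code) of what a Punica-style batched LoRA apply looks like when written with plain PyTorch ops instead of a custom Pallas kernel. The names (`lora_a_stacked`, `lora_b_stacked`, `token_lora_indices`) and shapes are illustrative assumptions, not the exact vLLM API.

```python
import torch


def bgmv_pytorch(
    x: torch.Tensor,                  # [num_tokens, hidden_dim] input activations
    lora_a_stacked: torch.Tensor,     # [num_loras, hidden_dim, rank] LoRA "A" weights
    lora_b_stacked: torch.Tensor,     # [num_loras, rank, out_dim] LoRA "B" weights
    token_lora_indices: torch.Tensor, # [num_tokens] LoRA id per token (-1 = no LoRA)
    scale: float = 1.0,
) -> torch.Tensor:
    """Compute scale * (x @ A_i @ B_i) per token, with A_i/B_i selected by each
    token's LoRA index, using only gathers and batched matmuls (hypothetical sketch)."""
    # Map "no LoRA" tokens to adapter 0 and mask their output instead of branching,
    # which keeps shapes static (friendlier to XLA/TPU compilation).
    safe_indices = token_lora_indices.clamp(min=0)
    a = lora_a_stacked[safe_indices]             # [num_tokens, hidden_dim, rank]
    b = lora_b_stacked[safe_indices]             # [num_tokens, rank, out_dim]

    shrunk = torch.bmm(x.unsqueeze(1), a)        # [num_tokens, 1, rank]
    expanded = torch.bmm(shrunk, b).squeeze(1)   # [num_tokens, out_dim]

    mask = (token_lora_indices >= 0).unsqueeze(-1).to(expanded.dtype)
    return scale * expanded * mask
```

A dedicated Pallas kernel would fuse the gather and the two matmuls, which is the follow-up mentioned above; the pure-PyTorch path is the straightforward but less efficient baseline.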
👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, which executes a small, essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.
Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
To run CI, PR reviewers can do one of these:
- Add the `ready` label to the PR
- Enable auto-merge.
🚀
It looks like the Async Engine, Inputs, Utils, Worker Test is failing on multimodal inputs, which are still a work in progress.
The TPU test seems to be failing on non-LoRA code. Do these tests pass on main? I'm wondering whether the failures are linked to this PR or to something else.
cc @lsy323 to take a pass
Switched to draft while I refactor for the v1 implementation.
This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @Akshat-Tripathi.
https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
Closing in favour of https://github.com/vllm-project/vllm/pull/14238