vllm
[Misc] Reduce LoRA-related static variable
Motivation
Remove the LoRA-related static variable supported_lora_modules. This not only makes the model implementations cleaner but also enables smoother LoRA support.
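For context, a minimal before/after sketch of the kind of change this implies. The class name and the dynamic-discovery rule in infer_lora_modules below are illustrative assumptions, not the PR's actual implementation:

```python
import torch.nn as nn

# Before: each model hard-codes which of its submodules support LoRA,
# static metadata that must be kept in sync across every model file.
class LlamaForCausalLMBefore(nn.Module):
    supported_lora_modules = ["qkv_proj", "o_proj", "gate_up_proj",
                              "down_proj", "embed_tokens", "lm_head"]

# After: no per-model list. The LoRA machinery can instead discover
# eligible submodules by inspecting the instantiated model (the exact
# discovery rule vLLM uses is an assumption here, for illustration only).
def infer_lora_modules(model: nn.Module) -> list[str]:
    return sorted({
        name.split(".")[-1]  # leaf module name, e.g. "qkv_proj"
        for name, module in model.named_modules()
        if isinstance(module, (nn.Linear, nn.Embedding))
    })
```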
Work
- [ ] Delete supported_lora_modules from all models
- [ ] Add unit test (see the sketch after this list)
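A minimal sketch of what such a unit test could look like, assuming the model classes are importable from vllm.model_executor.models; the single-entry MODEL_CLASSES list is a placeholder for iterating over all registered models:

```python
# Illustrative unit test sketch: verify the static variable is gone.
import pytest

from vllm.model_executor.models.llama import LlamaForCausalLM

# Placeholder; in practice this would cover every registered model class.
MODEL_CLASSES = [LlamaForCausalLM]

@pytest.mark.parametrize("model_cls", MODEL_CLASSES)
def test_supported_lora_modules_removed(model_cls):
    # After this PR, no model class should define the static variable.
    assert not hasattr(model_cls, "supported_lora_modules")
```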
👋 Hi! Thank you for contributing to the vLLM project.
💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small, essential subset of tests to catch errors quickly. You can run the other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.
Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.
🚀
This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @jeejeelee.
https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
@DarkLight1337 Do you know what's causing the current CI failures?
There were some issues with HF's opt repo yesterday, which should have been fixed. I think re-running these CI jobs should be fine.