[Core] Add Support for Default Modality Specific LoRAs [generate / chat completions]
Purpose
Fixes https://github.com/vllm-project/vllm/issues/16994
This PR adds support for default modality-specific LoRA adapters. This is useful for models like Granite Speech and Phi-4-multimodal, which ship bundled with their own LoRA but currently require the user to remember to pass a `LoRARequest` with every offline call containing audio, or to send the request to the LoRA model name when running online.
Note that this should generally be applicable to most request types that can take LoRAs, but the initial implementation only adds it to `.generate` for offline inference and to chat completions for online serving.
Test Plan
from transformers import AutoTokenizer

from vllm import LLM, SamplingParams
from vllm.assets.audio import AudioAsset

model_id = "ibm-granite/granite-speech-3.3-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)


def get_prompt(question: str, has_audio: bool):
    """Build the input prompt to send to vLLM."""
    if has_audio:
        question = f"<|audio|>{question}"
    chat = [
        {
            "role": "user",
            "content": question,
        }
    ]
    return tokenizer.apply_chat_template(chat, tokenize=False)


model = LLM(
    model=model_id,
    enable_lora=True,
    max_lora_rank=64,
    max_model_len=2048,
    limit_mm_per_prompt={"audio": 1},
    # Will always pass a `LoRARequest` with the `model_id`
    # whenever audio is contained in the request data.
    default_mm_loras={"audio": model_id},
    enforce_eager=True,
)

question = "can you transcribe the speech into a written format?"
prompt_with_audio = get_prompt(
    question=question,
    has_audio=True,
)
audio = AudioAsset("mary_had_lamb").audio_and_sample_rate

inputs = {
    "prompt": prompt_with_audio,
    "multi_modal_data": {
        "audio": audio,
    },
}

outputs = model.generate(
    inputs,
    sampling_params=SamplingParams(
        temperature=0.2,
        max_tokens=64,
    ),
)

print(f"Audio Example - Question: {question}")
for output in outputs:
    print("------")
    print(output.outputs[0].text)
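For comparison, here is a minimal sketch of the status quo this PR removes: without `default_mm_loras`, every call containing audio needs an explicit `LoRARequest` (the adapter name and integer id below are illustrative; the id just needs to be unique per adapter).

from vllm.lora.request import LoRARequest

outputs = model.generate(
    inputs,
    sampling_params=SamplingParams(temperature=0.2, max_tokens=64),
    # Easy to forget on any single call; with `default_mm_loras`,
    # this argument is applied automatically for audio requests.
    lora_request=LoRARequest("granite-speech-lora", 1, model_id),
)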
For the online server, you can pass the mapping on the command line as a JSON object.
vllm serve ibm-granite/granite-speech-3.3-2b \
--max-model-len 2048 \
--enable-lora \
--default-mm-loras '{"audio":"ibm-granite/granite-speech-3.3-2b"}' \
--max-lora-rank 64
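Once the server is up, audio requests can target the base model name directly. Below is a minimal sketch using the OpenAI-compatible chat completions API, assuming the server above is on localhost:8000 and a local audio file (a placeholder name here) stands in for real input; because the request contains audio, the default audio LoRA should be applied without naming the adapter.

import base64

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Placeholder audio file; any short speech clip works.
with open("mary_had_lamb.ogg", "rb") as f:
    audio_b64 = base64.b64encode(f.read()).decode("utf-8")

chat_completion = client.chat.completions.create(
    # Note: the base model name, not a LoRA model name.
    model="ibm-granite/granite-speech-3.3-2b",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "can you transcribe the speech into a written format?"},
            {"type": "audio_url",
             "audio_url": {"url": f"data:audio/ogg;base64,{audio_b64}"}},
        ],
    }],
    temperature=0.2,
    max_tokens=64,
)
print(chat_completion.choices[0].message.content)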
Test Result
Running the audio example correctly applies the LoRA and produces a transcription.
CC @jeejeelee @DarkLight1337
This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @alex-jw-brooks.
https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
Hey @jeejeelee, thank you for the review. This PR is ready for another look when you have a moment!
Hi @jeejeelee, just wanted to follow up here to see if you have any additional thoughts on this PR? Adding this feature would be a big help for us 😄
Thanks @jeejeelee! Just did 😄