Jee Jee Li
Could you please provide detailed error information?
I think this means that [transformers-fallback](https://docs.vllm.ai/en/latest/models/supported_models.html#transformers-fallback) doesn't support these two features. For models integrated with vLLM, we support QLoRA. BTW, after https://github.com/vllm-project/vllm/pull/13166 landed, I think `transformers-fallback` can support LoRA...
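For context, a minimal sketch of the QLoRA-style path for a model integrated with vLLM; the base model name and adapter path below are placeholders/assumptions for illustration, not taken from this issue:

```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# Sketch only: the base model and adapter path are assumptions for illustration.
llm = LLM(
    model="unsloth/llama-3-8b-bnb-4bit",  # a bitsandbytes-quantized base model
    quantization="bitsandbytes",
    load_format="bitsandbytes",
    enable_lora=True,
)

outputs = llm.generate(
    ["Hello, my name is"],
    SamplingParams(max_tokens=32),
    # The adapter path is a placeholder; point it at your trained LoRA weights.
    lora_request=LoRARequest("my_adapter", 1, "/path/to/lora_adapter"),
)
print(outputs[0].outputs[0].text)
```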
Could you please provide more detailed information, such as logs and error messages?
There are plans to support this, but it's not a high priority. It may take 1-2 months.
> > There are plans to support this, but it's not a high priority. It may take 1-2 months.
>
> Are you referring to encoder-decoder support or multimodal support?

...
The `mllama` model currently doesn't support LoRA because it's an encoder-decoder multimodal model. Other models, such as `Idefics3`, support LoRA on the text decoder.
> Qwen2VL

Yeah, it should be noted that when training LoRA, it can only be added to the text decoder.
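To make the training-side point concrete, here is a hedged sketch using PEFT; the `target_modules` names are assumptions that depend on the exact model implementation, but the idea is to restrict the adapter to the text decoder's projection layers rather than the vision encoder:

```python
from peft import LoraConfig

# Sketch: module names are assumptions and vary by model class; the point is
# that only text-decoder (language model) modules receive LoRA weights.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # text-decoder attention
    task_type="CAUSAL_LM",
)
```

An adapter trained this way can then be loaded in vLLM with `enable_lora=True`, as in the snippet earlier in this thread.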
Minimal code snippet:

```python
from vllm import LLM

llm = LLM(
    model=YOUR_MODEL_PATH,
    pipeline_parallel_size=2,
)
```
See: https://github.com/vllm-project/vllm/blob/main/vllm/engine/arg_utils.py#L112
Try:

```shell
vllm serve YOUR_MODEL_PATH --pipeline-parallel-size 2
```
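Once the server is up, it exposes an OpenAI-compatible API (on port 8000 by default), so a quick smoke test might look like the following sketch; the prompt is a placeholder and the model name must match whatever the server was started with:

```python
from openai import OpenAI

# Sketch: assumes `vllm serve` is running locally on its default port.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="YOUR_MODEL_PATH",  # must match the model the server was started with
    prompt="Hello, my name is",
    max_tokens=32,
)
print(completion.choices[0].text)
```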