[Bug]: Model architectures ['Qwen2AudioForConditionalGeneration'] are not supported for now.
Your current environment
pip install git+https://github.com/vllm-project/vllm.git
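(As a quick sanity check, not part of the original report: printing the installed package version confirms which build the command above actually produced, since Qwen2Audio support depends on the vLLM version/commit.)

```python
# Sketch: confirm which vLLM build is installed. vllm.__version__ is a
# standard package attribute; Qwen2Audio support depends on the build.
import vllm

print(vllm.__version__)
```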
Model Input Dumps
No response
🐛 Describe the bug
(VllmWorkerProcess pid=544861) ERROR 09-13 18:22:53 multiproc_worker_utils.py:226]   File "/apps1/zhangfan/anaconda3/envs/new_swift/lib/python3.10/site-packages/vllm/executor/multiproc_worker_utils.py", line 223, in _run_worker_process
(VllmWorkerProcess pid=544861) ERROR 09-13 18:22:53 multiproc_worker_utils.py:226]     output = executor(*args, **kwargs)
(VllmWorkerProcess pid=544861) ERROR 09-13 18:22:53 multiproc_worker_utils.py:226]   File "/apps1/zhangfan/anaconda3/envs/new_swift/lib/python3.10/site-packages/vllm/worker/worker.py", line 183, in load_model
(VllmWorkerProcess pid=544861) ERROR 09-13 18:22:53 multiproc_worker_utils.py:226]     self.model_runner.load_model()
(VllmWorkerProcess pid=544861) ERROR 09-13 18:22:53 multiproc_worker_utils.py:226]   File "/apps1/zhangfan/anaconda3/envs/new_swift/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 999, in load_model
(VllmWorkerProcess pid=544861) ERROR 09-13 18:22:53 multiproc_worker_utils.py:226]     self.model = get_model(model_config=self.model_config,
(VllmWorkerProcess pid=544861) ERROR 09-13 18:22:53 multiproc_worker_utils.py:226]   File "/apps1/zhangfan/anaconda3/envs/new_swift/lib/python3.10/site-packages/vllm/model_executor/model_loader/__init__.py", line 19, in get_model
(VllmWorkerProcess pid=544861) ERROR 09-13 18:22:53 multiproc_worker_utils.py:226]     return loader.load_model(model_config=model_config,
(VllmWorkerProcess pid=544861) ERROR 09-13 18:22:53 multiproc_worker_utils.py:226]   File "/apps1/zhangfan/anaconda3/envs/new_swift/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 358, in load_model
(VllmWorkerProcess pid=544861) ERROR 09-13 18:22:53 multiproc_worker_utils.py:226]     model = _initialize_model(model_config, self.load_config,
(VllmWorkerProcess pid=544861) ERROR 09-13 18:22:53 multiproc_worker_utils.py:226]   File "/apps1/zhangfan/anaconda3/envs/new_swift/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 170, in _initialize_model
(VllmWorkerProcess pid=544861) ERROR 09-13 18:22:53 multiproc_worker_utils.py:226]     model_class, _ = get_model_architecture(model_config)
(VllmWorkerProcess pid=544861) ERROR 09-13 18:22:53 multiproc_worker_utils.py:226]   File "/apps1/zhangfan/anaconda3/envs/new_swift/lib/python3.10/site-packages/vllm/model_executor/model_loader/utils.py", line 39, in get_model_architecture
(VllmWorkerProcess pid=544861) ERROR 09-13 18:22:53 multiproc_worker_utils.py:226]     return ModelRegistry.resolve_model_cls(architectures)
(VllmWorkerProcess pid=544861) ERROR 09-13 18:22:53 multiproc_worker_utils.py:226]   File "/apps1/zhangfan/anaconda3/envs/new_swift/lib/python3.10/site-packages/vllm/model_executor/models/__init__.py", line 178, in resolve_model_cls
(VllmWorkerProcess pid=544861) ERROR 09-13 18:22:53 multiproc_worker_utils.py:226]     raise ValueError(
(VllmWorkerProcess pid=544861) ERROR 09-13 18:22:53 multiproc_worker_utils.py:226] ValueError: Model architectures ['Qwen2AudioForConditionalGeneration'] are not supported for now. Supported architectures: ['AquilaModel', 'AquilaForCausalLM', 'BaiChuanForCausalLM', 'BaichuanForCausalLM', 'BloomForCausalLM', 'ChatGLMModel', 'ChatGLMForConditionalGeneration', 'CohereForCausalLM', 'DbrxForCausalLM', 'DeciLMForCausalLM', 'DeepseekForCausalLM', 'DeepseekV2ForCausalLM', 'ExaoneForCausalLM', 'FalconForCausalLM', 'GemmaForCausalLM', 'Gemma2ForCausalLM', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTJForCausalLM', 'GPTNeoXForCausalLM', 'InternLMForCausalLM', 'InternLM2ForCausalLM', 'JAISLMHeadModel', 'LlamaForCausalLM', 'LLaMAForCausalLM', 'MistralForCausalLM', 'MixtralForCausalLM', 'QuantMixtralForCausalLM', 'MptForCausalLM', 'MPTForCausalLM', 'MiniCPMForCausalLM', 'NemotronForCausalLM', 'OlmoForCausalLM', 'OPTForCausalLM', 'OrionForCausalLM', 'PersimmonForCausalLM', 'PhiForCausalLM', 'Phi3ForCausalLM', 'PhiMoEForCausalLM', 'Qwen2ForCausalLM', 'Qwen2MoeForCausalLM', 'Qwen2VLForConditionalGeneration', 'RWForCausalLM', 'StableLMEpochForCausalLM', 'StableLmForCausalLM', 'Starcoder2ForCausalLM', 'ArcticForCausalLM', 'XverseForCausalLM', 'Phi3SmallForCausalLM', 'MedusaModel', 'EAGLEModel', 'MLPSpeculatorPreTrainedModel', 'JambaForCausalLM', 'GraniteForCausalLM', 'MistralModel', 'Blip2ForConditionalGeneration', 'ChameleonForConditionalGeneration', 'FuyuForCausalLM', 'InternVLChatModel', 'LlavaForConditionalGeneration', 'LlavaNextForConditionalGeneration', 'LlavaNextVideoForConditionalGeneration', 'MiniCPMV', 'PaliGemmaForConditionalGeneration', 'Phi3VForCausalLM', 'PixtralForConditionalGeneration', 'QWenLMHeadModel', 'UltravoxModel', 'BartModel', 'BartForConditionalGeneration']
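The error is raised from `ModelRegistry.resolve_model_cls`, i.e. the `Qwen2AudioForConditionalGeneration` architecture is simply not registered in this build. A minimal sketch for checking that directly, without loading any weights (assuming `ModelRegistry.get_supported_archs()` is available in the installed version, which is what produces the list printed above):

```python
# Sketch: check whether this vLLM build registers the Qwen2Audio architecture
# before attempting to load the model. get_supported_archs() is assumed to
# exist in the installed version (it backs the list shown in the error).
from vllm import ModelRegistry

archs = ModelRegistry.get_supported_archs()
print("Qwen2AudioForConditionalGeneration" in archs)  # False on builds without Qwen2Audio support
```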
Before submitting a new issue...
- [X] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
See: https://github.com/vllm-project/vllm/issues/8394
We don’t support Qwen2Audio yet?
Yep