
Cannot run Qwen2.5 Omni with the latest transformers

Open ZhangWei125521 opened this issue 5 months ago • 5 comments

**I can run Qwen2.5 Omni with `pip install git+https://github.com/huggingface/transformers@f742a644ca32e65758c3adb36225aef1731bd2a8`, and it runs successfully. But I want to use the multi-batch function, and that commit does not contain this function, so I have to update transformers to version 4.52.3 or later.

And it throws an error like this:**

```
Traceback (most recent call last):
  File "C:\Work\Omni\resource\cptest\test_7B\quan.py", line 77, in <module>
    text_ids = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO, return_audio=False)
  File "C:\ProgramData\anaconda3\envs\new_qwen\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "C:\ProgramData\anaconda3\envs\new_qwen\Lib\site-packages\transformers\models\qwen2_5_omni\modeling_qwen2_5_omni.py", line 4510, in generate
    thinker_result = self.thinker.generate(input_ids=input_ids, **thinker_kwargs)
  File "C:\ProgramData\anaconda3\envs\new_qwen\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "C:\ProgramData\anaconda3\envs\new_qwen\Lib\site-packages\ipex_llm\transformers\pipeline_parallel.py", line 283, in generate
    return original_generate(self,
  File "C:\ProgramData\anaconda3\envs\new_qwen\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "C:\ProgramData\anaconda3\envs\new_qwen\Lib\site-packages\transformers\generation\utils.py", line 2597, in generate
    result = self._sample(
  File "C:\ProgramData\anaconda3\envs\new_qwen\Lib\site-packages\transformers\generation\utils.py", line 3557, in _sample
    outputs = self(**model_inputs, return_dict=True)
  File "C:\ProgramData\anaconda3\envs\new_qwen\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\ProgramData\anaconda3\envs\new_qwen\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\ProgramData\anaconda3\envs\new_qwen\Lib\site-packages\transformers\models\qwen2_5_omni\modeling_qwen2_5_omni.py", line 2366, in forward
    audio_features = self.get_audio_features(
  File "C:\ProgramData\anaconda3\envs\new_qwen\Lib\site-packages\transformers\models\qwen2_5_omni\modeling_qwen2_5_omni.py", line 2250, in get_audio_features
    audio_outputs = self.audio_tower(
  File "C:\ProgramData\anaconda3\envs\new_qwen\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\ProgramData\anaconda3\envs\new_qwen\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\ProgramData\anaconda3\envs\new_qwen\Lib\site-packages\transformers\models\qwen2_5_omni\modeling_qwen2_5_omni.py", line 911, in forward
    layer_outputs = encoder_layer(
  File "C:\ProgramData\anaconda3\envs\new_qwen\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\ProgramData\anaconda3\envs\new_qwen\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\ProgramData\anaconda3\envs\new_qwen\Lib\site-packages\transformers\models\qwen2_5_omni\modeling_qwen2_5_omni.py", line 776, in forward
    hidden_states = residual + hidden_states
TypeError: unsupported operand type(s) for +: 'Tensor' and 'tuple'
```
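The last frame points at the audio tower's encoder layer: the `hidden_states` coming back from the preceding sub-module is a tuple rather than a tensor. A plausible reading (an assumption, not verified against the ipex-llm source) is that the optimized forward still returns an older-style `(hidden_states, attn_weights)` tuple while the 4.52.x encoder layer expects a bare tensor. A minimal, self-contained sketch of that failing pattern with dummy tensors, not the actual model code:

```python
import torch

residual = torch.zeros(2, 4)

# What the newer encoder layer expects back from its sub-module: a plain tensor.
hidden_states = torch.zeros(2, 4)
print((residual + hidden_states).shape)  # works: torch.Size([2, 4])

# What an older-style forward may return instead: an (output, attn_weights) tuple.
hidden_states = (torch.zeros(2, 4), None)
try:
    residual + hidden_states
except TypeError as e:
    print(e)  # unsupported operand type(s) for +: 'Tensor' and 'tuple'
```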

And my inference code looks like this:

```python
import soundfile as sf
from transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor
from qwen_omni_utils import process_mm_info
from ipex_llm import optimize_model

model_path = r"C:\Work\Omni\resource\Qwen2.5-Omni-3B"
model = Qwen2_5OmniForConditionalGeneration.from_pretrained(model_path, enable_audio_output=False)

model = optimize_model(
    model,
    low_bit='sym_int4',
    optimize_llm=True,
    modules_to_not_convert=["audio_tower", "visual", "token2wav"],
)
model = model.half().to('xpu')

processor = Qwen2_5OmniProcessor.from_pretrained(r"C:\Work\Omni\resource\Qwen2.5-Omni-3B")

conversation1 = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are Qwen..."}],
    },
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "describe the audio"},
            {"type": "audio", "audio": r"C:\Work\Omni\resource\cptest\test_7B\test20.wav"},
        ],
    },
]

conversation2 = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "audio", "audio": r"C:\Work\Omni\resource\cptest\test_7B\test48.wav"},
        ],
    },
]

conversation = [conversation1, conversation2]

USE_AUDIO_IN_VIDEO = True
text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
audios, images, videos = process_mm_info(conversation, use_audio_in_video=USE_AUDIO_IN_VIDEO)

inputs = processor(
    text=text,
    audio=audios,
    images=images,
    videos=videos,
    return_tensors="pt",
    padding=True,
    use_audio_in_video=USE_AUDIO_IN_VIDEO,
)

inputs = inputs.to(model.device).to(model.dtype)

print("\n===== Model input 1 =====")
print(f"input_ids shape: {inputs.input_ids.shape}")
if hasattr(inputs, "audio_features"):
    print(f"audio_features shape: {inputs.audio_features.shape}")
else:
    print("audio_features does not exist")

text_ids = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO, return_audio=False)
text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print("========================")
print(text)
print("========================")
```

ZhangWei125521 · Jul 18 '25

I found that if I set optimize_llm=False, the inference succeeds. I don't know why this happens.

ZhangWei125521 · Jul 19 '25

> I found that if I set optimize_llm=False, the inference succeeds. I don't know why this happens.

Hi @ZhangWei125521, if optimize_llm=False works, that means we have a transformers version mismatch. According to your info, 4.52.3 is not well supported.

Can you try transformers==4.37 (the default version recommended by IPEX-LLM)?

qiyuangong · Jul 21 '25

> I found that if I set optimize_llm=False, the inference succeeds. I don't know why this happens.

> Hi @ZhangWei125521, if optimize_llm=False works, that means we have a transformers version mismatch. According to your info, 4.52.3 is not well supported.

> Can you try transformers==4.37 (the default version recommended by IPEX-LLM)?

Hi @qiyuangong, thanks for your reply, but I am aiming for Qwen2.5 Omni multi-batch inference with IPEX-LLM, and transformers==4.37 does not support Qwen2.5 Omni.

ZhangWei125521 · Jul 21 '25

> I found that if I set optimize_llm=False, the inference succeeds. I don't know why this happens.

> Hi @ZhangWei125521, if optimize_llm=False works, that means we have a transformers version mismatch. According to your info, 4.52.3 is not well supported. Can you try transformers==4.37 (the default version recommended by IPEX-LLM)?

> Hi @qiyuangong, thanks for your reply, but I am aiming for Qwen2.5 Omni multi-batch inference with IPEX-LLM, and transformers==4.37 does not support Qwen2.5 Omni.

OK. In that case, you can only use optimize_llm=False with the newer transformers version (which has the multi-batch feature). That means IPEX-LLM will only optimize the linear layers of your model.
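For reference, a minimal sketch of that workaround applied to the script above; the paths and the sym_int4 choice come from the original report, and treating optimize_llm=False as sufficient on transformers 4.52.3 is an assumption:

```python
from transformers import Qwen2_5OmniForConditionalGeneration
from ipex_llm import optimize_model

model_path = r"C:\Work\Omni\resource\Qwen2.5-Omni-3B"
model = Qwen2_5OmniForConditionalGeneration.from_pretrained(model_path, enable_audio_output=False)

# optimize_llm=False skips the LLM-specific rewrites that clash with the newer
# transformers forward signatures; IPEX-LLM then only converts the linear layers.
model = optimize_model(
    model,
    low_bit='sym_int4',
    optimize_llm=False,
    modules_to_not_convert=["audio_tower", "visual", "token2wav"],
)
model = model.half().to('xpu')
```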

qiyuangong · Jul 23 '25

Hi @ZhangWei125521,

We have recently added support for Qwen2.5 Omni models in llm-scaler-vllm. You can now test this functionality using the Docker image intel/llm-scaler-vllm:0.2.0-b2.

For implementation details and guidance, please refer to our Omni Model documentation.
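For anyone trying the container route, below is a minimal client-side sketch. It assumes the llm-scaler-vllm container is already running and exposes vLLM's standard OpenAI-compatible API on port 8000, and the served model name used here is hypothetical; check the Omni Model documentation for the actual launch command and model name:

```python
import requests

BASE_URL = "http://localhost:8000/v1"   # assumption: default vLLM serving port
MODEL_NAME = "Qwen/Qwen2.5-Omni-3B"     # hypothetical served model name

payload = {
    "model": MODEL_NAME,
    "messages": [
        {"role": "user", "content": "Give a one-sentence summary of what the Qwen2.5-Omni model can do."}
    ],
    "max_tokens": 128,
}

# Query the OpenAI-compatible chat completions endpoint and print the reply.
resp = requests.post(f"{BASE_URL}/chat/completions", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```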

xiangyuT · Jul 28 '25