
Qwen 2.5-vl support?

Open Gusha-nye opened this issue 7 months ago • 2 comments

Hi guys, I recently tried to run Qwen2.5-VL-3B inference on an Intel GPU with the IPEX-LLM framework, but it failed with the following error:

[Screenshot of the error message]

Can you tell me how this should be solved, or is the Qwen2.5-vl model not currently supported?

Gusha-nye avatar May 24 '25 04:05 Gusha-nye

Hi @Gusha-nye, Qwen2.5-VL is not currently supported.

cyita avatar Jun 13 '25 02:06 cyita

Hi @Gusha-nye, we now support IPEX-LLM-optimized Qwen2.5-VL models with vLLM. You can follow our vLLM docs to get started.

xiangyuT avatar Jun 16 '25 02:06 xiangyuT

Hi @xiangyuT, could you tell me how to send a video to Qwen2.5-VL for recognition?

buffliu avatar Jun 24 '25 04:06 buffliu

@buffliu You can add --allowed-local-media-path /llm/models/media when starting the vLLM service, and then send a video request like:

curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen2.5-VL-7B-Instruct",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "What is in the video?"
          },
          {
            "type": "video_url",
            "video_url": {
              "url": "file:/llm/models/media/test.mp4"
            }
          }
        ]
      }
    ],
    "max_tokens": 512
  }'
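
The same request can also be issued from Python. Below is a minimal sketch that builds the OpenAI-compatible payload; the model name, server address, and media path are taken from the curl example above and would need to match your actual deployment:

```python
import json

def build_video_chat_payload(model, prompt, video_url, max_tokens=512):
    """Build an OpenAI-compatible chat payload with a video_url content part."""
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "video_url", "video_url": {"url": video_url}},
            ],
        }],
        "max_tokens": max_tokens,
    }

payload = build_video_chat_payload(
    "Qwen2.5-VL-7B-Instruct",
    "What is in the video?",
    "file:/llm/models/media/test.mp4",  # must be under --allowed-local-media-path
)
print(json.dumps(payload, indent=2))

# To send it while the vLLM service is running, e.g. with the requests library:
#   import requests
#   r = requests.post("http://localhost:8000/v1/chat/completions", json=payload)
#   print(r.json()["choices"][0]["message"]["content"])
```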

hzjane avatar Jun 24 '25 05:06 hzjane

@hzjane, thank you so much for your help.

buffliu avatar Jun 24 '25 06:06 buffliu

Issue resolved, closing.

lalalapotter avatar Jun 25 '25 01:06 lalalapotter