Qwen2.5-VL support?
Hi, I recently tried to run Qwen2.5-VL-3B inference on an Intel GPU using the IPEX framework, but it failed with the following error:
Can you tell me how to solve this, or is the Qwen2.5-VL model not currently supported?
Hi @Gusha-nye, Qwen2.5-VL is not supported.
Hi @Gusha-nye, IPEX-LLM-optimized Qwen2.5-VL models are now supported with vLLM. You can follow our vLLM docs to get started.
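Once the service is running, the endpoint is OpenAI-compatible, so a quick sanity check can be done with the openai Python client. This is a minimal sketch only: the port, API key, served model name, and image URL below are assumptions and should match however you actually started the server.

# Minimal sketch: query an OpenAI-compatible vLLM endpoint serving Qwen2.5-VL.
# base_url, api_key, model name, and image URL are assumptions; adjust to your setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen2.5-VL-7B-Instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {
                    "type": "image_url",
                    # Hypothetical image URL for illustration.
                    "image_url": {"url": "https://example.com/sample.jpg"},
                },
            ],
        }
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)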
Hi @xiangyuT, could you tell me how to send a video to Qwen2.5-VL for recognition?
@buffliu You can add --allowed-local-media-path /llm/models/media when starting the vLLM service, and then send a video like this:
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen2.5-VL-7B-Instruct",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "What is in the video?"
          },
          {
            "type": "video_url",
            "video_url": {
              "url": "file:/llm/models/media/test.mp4"
            }
          }
        ]
      }
    ],
    "max_tokens": 512
  }'
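For reference, here is the same request sketched with Python's requests library; the endpoint, model name, and local file path simply mirror the curl example above and remain assumptions tied to your own setup.

# Sketch of the same local-video request using the requests library.
# Endpoint, model name, and file path mirror the curl example above; adjust as needed.
import requests

payload = {
    "model": "Qwen2.5-VL-7B-Instruct",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in the video?"},
                {
                    "type": "video_url",
                    # The file must sit under the directory passed to --allowed-local-media-path.
                    "video_url": {"url": "file:/llm/models/media/test.mp4"},
                },
            ],
        }
    ],
    "max_tokens": 512,
}

resp = requests.post("http://localhost:8000/v1/chat/completions", json=payload)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])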
@hzjane, thank you so much for your help.
Issue resolved, closing.