
Feature Request: Qwen 2.5 VL

Open bold84 opened this issue 10 months ago • 72 comments

Prerequisites

  • [x] I am running the latest code. Mention the version if possible as well.
  • [x] I carefully followed the README.md.
  • [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • [x] I reviewed the Discussions, and have a new and useful enhancement to share.

Feature Description

Is anybody implementing this?

If not, I may give it a go. But it will take some time as I am new to the source side of llama.cpp/ggml.

Motivation

Well, it's not currently working. :-)

Possible Implementation

Based on the existing Qwen 2 VL implementation.

bold84 avatar Jan 29 '25 11:01 bold84

I'm currently looking into Transformers' Qwen2.5VL implementation and waiting for the paper to drop so I can better assess the differences between Qwen2VL and Qwen2.5VL. 👀

HimariO avatar Jan 29 '25 13:01 HimariO

cool

3unnycheung avatar Jan 29 '25 14:01 3unnycheung

I support this!

samkoesnadi avatar Jan 29 '25 18:01 samkoesnadi

Our world definitely needs this!

Shyryp avatar Feb 02 '25 14:02 Shyryp

Any progress on this? Who added support for Qwen 2 VL?

peter-ch avatar Feb 13 '25 13:02 peter-ch

qwen2.5-vl report is up! https://huggingface.co/papers/2502.13923

edit: official codebase here: https://github.com/QwenLM/Qwen2.5-VL

pszemraj avatar Feb 20 '25 22:02 pszemraj

I can start working on this if no one else is already.

vladislavdonchev avatar Feb 22 '25 17:02 vladislavdonchev

OK then!

First order of business would be to build the GGUF file(s). It seems there is an issue with that when using the latest official Transformers:

python convert_hf_to_gguf.py .\build\bin\Release\Qwen2.5-VL-7B-Instruct\
INFO:hf-to-gguf:Loading model: Qwen2.5-VL-7B-Instruct
ERROR:hf-to-gguf:Model Qwen2_5_VLForConditionalGeneration is not supported

This is a hot topic at the moment: https://github.com/huggingface/transformers/issues/36292 https://github.com/QwenLM/Qwen2.5-VL/issues/723

It appears a temporary workaround is to use the old Qwen2 templates. People are reporting that this works, so I'll post an update in a bit.

vladislavdonchev avatar Feb 22 '25 21:02 vladislavdonchev

Right, so this one is a bit of a rabbit hole...

I. Reverting the Qwen2.5 config files to:

"processor_class": "Qwen2VLProcessor"

and

  "architectures": [
    "Qwen2VLForConditionalGeneration"
  ]

Produces a (seemingly) working model! We've started testing and quantizing it here: https://huggingface.co/IAILabs/Qwen2.5-VL-7b-Instruct-GGUF/tree/main
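If you want to reproduce this locally, the revert can be applied by patching the model's config files before running convert_hf_to_gguf.py. A rough Python sketch (paths are placeholders and the exact file holding processor_class may vary between checkpoints; this is an illustration, not part of llama.cpp):

import json
from pathlib import Path

model_dir = Path("Qwen2.5-VL-7B-Instruct")  # placeholder path to the HF checkpoint

# Point the architecture back at the Qwen2-VL class so the converter accepts it.
cfg_path = model_dir / "config.json"
cfg = json.loads(cfg_path.read_text())
cfg["architectures"] = ["Qwen2VLForConditionalGeneration"]
cfg_path.write_text(json.dumps(cfg, indent=2))

# processor_class usually lives in preprocessor_config.json; adjust if yours differs.
pp_path = model_dir / "preprocessor_config.json"
if pp_path.exists():
    pp = json.loads(pp_path.read_text())
    pp["processor_class"] = "Qwen2VLProcessor"
    pp_path.write_text(json.dumps(pp, indent=2))

After that, convert_hf_to_gguf.py should no longer reject the Qwen2_5_VLForConditionalGeneration architecture.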


II. In order to get a usable experience, you need to make sure CLIP is running with hardware acceleration. This currently requires you to revert this commit: https://github.com/ggml-org/llama.cpp/pull/10896

For more information refer to: https://github.com/ggml-org/llama.cpp/issues/11322

The following PR seems to correct (at least) some of the issues that led to disabling hardware acceleration in the first place: https://github.com/ggml-org/llama.cpp/pull/11902

So, it is now up to us to prove that everything is working properly.

I'll start a stress / perf eval test alongside the quantization process, so we have a better idea about what's going on.

vladislavdonchev avatar Feb 22 '25 22:02 vladislavdonchev

UPDATE: A few 4-bit quants have been uploaded, including two that support online auto-repacking.

The latest main looks stable with Vulkan CLIP and any model thrown at it so far. Some preliminary insights:

  • 1200x1200 is the maximum image size you can encode with 16GB of VRAM. clip.cpp does not seem to support multi-GPU Vulkan yet.
  • A 4060Ti-class GPU delivers 20-30 t/s with Q8_0 and double that with Q4 at 16-32K context.
  • Batching (multiple images) in a single CLI call seems to be working fine: llama-qwen2vl-cli --ctx-size 16000 -n 16000 -m ~/gguf/Qwen2.5-VL-7B-Instruct-Q4_0.gguf --mmproj ~/gguf/mmproj-Qwen2.5-VL-7B-Instruct-f32.gguf --n_gpu_layers 9999 -p "Describe the image in detail. Extract all textual information from it. Output as detailed JSON." -p "Analyze the image." --image ~/Pictures/test_small.png --image ~/Pictures/test_small.png

Output quality looks very promising! We'll release all of the benchmark code when ready, so the process can be streamlined for other models.

vladislavdonchev avatar Feb 23 '25 11:02 vladislavdonchev

Hi! Excellent news, thank you very much for this!

I was able to run the model using code from git main on a 4 x Radeon 7900 XTX 24 GB workstation, but with CLIP on the CPU. I tried to enable Vulkan acceleration for CLIP by uncommenting the lines in clip.cpp under examples, but in that case I get an OOM. I tried this with the FP16, Q4K_M and IQ4_XS models. Telling the CLI to use just one Vulkan device does not help with the OOM / CLIP GPU issue either.

hvico avatar Feb 24 '25 02:02 hvico


Hi, could you please confirm what the resolution of your input images is?

EDIT: As per the Qwen2.5 docs: min_pixels = 256x28x28, max_pixels = 1280x28x28.

A RTFM moment for me...
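To stay under that limit, images can be downscaled before they are passed to the CLI. A minimal Pillow sketch (illustrative only; the filenames and the scaling heuristic are my assumptions, not something llama.cpp does for you):

import math
from PIL import Image

MAX_PIXELS = 1280 * 28 * 28  # max_pixels from the Qwen2.5 docs (~1.0 MP)

def shrink_to_max_pixels(src_path: str, dst_path: str) -> None:
    # Downscale proportionally so that width * height <= MAX_PIXELS.
    img = Image.open(src_path)
    w, h = img.size
    if w * h > MAX_PIXELS:
        scale = math.sqrt(MAX_PIXELS / (w * h))
        img = img.resize((max(1, int(w * scale)), max(1, int(h * scale))))
    img.save(dst_path)

shrink_to_max_pixels("test.png", "test_small.png")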

vladislavdonchev avatar Feb 24 '25 05:02 vladislavdonchev


Hi, could you please confirm what the resolution of your input images is? With 24G VRAM, you can expect an OOM with images >1400x1400 pixels, so you need to make sure the files are pre-processed correctly.

Thanks.

My image was 1475x1062. I was able to run inference successfully using a 1077x671 sample, without OOM. Would it be possible to run CLIP and the VL model on separate GPUs? Thanks again.

hvico avatar Feb 24 '25 12:02 hvico


Thank you very much for your research and sharing! I would like to ask how to get the mmproj from the Qwen2.5-VL model. The original qwen2_vl_surgery.py used for Qwen2-VL doesn't seem to work. Could you share your method? Thank you very much!

zrrraa avatar Feb 25 '25 13:02 zrrraa


Get it from our HF: https://huggingface.co/IAILabs/Qwen2.5-VL-7b-Instruct-GGUF

vladislavdonchev avatar Feb 25 '25 16:02 vladislavdonchev

Thank you for the effort, a lot of people really need this.

Any updates on the progress? Will this still take a few days, or is it more like a few weeks or months?

Thanks a lot again, we appreciate you guys a lot!

ChmHsm avatar Feb 27 '25 09:02 ChmHsm

@vladislavdonchev Great work! Have you done the 3B version? I can also do it myself if you provide the conversion script :)

samkoesnadi avatar Feb 27 '25 14:02 samkoesnadi


Working on it as we speak, along with a quantization tool:


https://github.com/Independent-AI-Labs/local-super-agents/tree/feat/additional-output-formats/quantbench

vladislavdonchev avatar Feb 27 '25 14:02 vladislavdonchev

UPDATE:

Opened a draft PR here: https://github.com/ggml-org/llama.cpp/pull/12119

Long story short, I'll need some help debugging the vision models and llama-qwen2vl-cli as we're unable to produce anything reliably.

In addition, this still isn't resolved: https://github.com/ggml-org/llama.cpp/issues/11322

I've also asked the Qwen folks for help: https://github.com/QwenLM/Qwen2.5-VL/issues/869

vladislavdonchev avatar Feb 28 '25 22:02 vladislavdonchev

Thanks @vladislavdonchev for the effort and the update.

I took a look at the issue you opened with the qwen team, is it only affecting the 3B model? Can we expect at least progress to continue with 7b?

Thank you!

ChmHsm avatar Feb 28 '25 22:02 ChmHsm


Unfortunately, we're unable to reliably produce a working vision model from either 7B or 3B. I am not sure how the one in the repo was exported, but it seems to be working, so it's either some weird coincidence or a mistake. I've verified the LM part, including the quants, and it appears to match what you'd expect from Qwen2.5 (the parameters in the .gguf look correct and the responses are OK).

vladislavdonchev avatar Feb 28 '25 23:02 vladislavdonchev


I am getting the following error while trying to use Qwen2.5-VL-7B-Instruct-Q4_K_M.gguf on Apple Silicon:

./llama-qwen2vl-cli -m "Qwen2.5-VL-7B-Instruct-Q4_K_M.gguf" --mmproj "Qwen2.5-VL-7B-Instruct-Q4_K_M.gguf" --n_gpu_layers 0 --image "wilma-7_oval.jpg" --image "wilma-7_oval.jpg" -p "Describe the image."

key general.description not found in file
libc++abi: terminating due to uncaught exception of type std::runtime_error: Missing required key: general.description
zsh: abort      ./llama-qwen2vl-cli -m  --mmproj  --n_gpu_layers 0 --image  --image  -p

Could somebody please help out?

David33706 avatar Mar 01 '25 17:03 David33706


Did you figure this out?

tomjpalamattam avatar Mar 02 '25 23:03 tomjpalamattam


Nope

David33706 avatar Mar 03 '25 10:03 David33706

Please stop spamming this thread. Qwen2.5 is still a WIP!

Regarding the issue above (./llama-qwen2vl-cli -m "Qwen2.5-VL-7B-Instruct-Q4_K_M.gguf" --mmproj "Qwen2.5-VL-7B-Instruct-Q4_K_M.gguf" ...): you cannot use the language model as the vision model. In your command you are passing the same GGUF file to both -m and --mmproj; the --mmproj argument expects a separate vision projector file.
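For reference, a working invocation would look roughly like the one below, assuming you have also downloaded a separate vision projector GGUF (for example the mmproj-Qwen2.5-VL-7B-Instruct-f32.gguf file from the repo; the exact filename may differ):

./llama-qwen2vl-cli -m Qwen2.5-VL-7B-Instruct-Q4_K_M.gguf --mmproj mmproj-Qwen2.5-VL-7B-Instruct-f32.gguf --image wilma-7_oval.jpg -p "Describe the image."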

Please wait until the implementation has been finalized. Most up-to-date news here: https://huggingface.co/IAILabs/Qwen2.5-VL-7B-Instruct-GGUF-WIP

vladislavdonchev avatar Mar 03 '25 10:03 vladislavdonchev


Hmm, FYI: that link gives a 404 ("Sorry, we can't find the page you are looking for").

sinkingsugar avatar Mar 07 '25 00:03 sinkingsugar


I've temporarily disabled the page, as too many people are trying to run the models with incorrect versions of llama.cpp.

There will be an update soon.

vladislavdonchev avatar Mar 07 '25 09:03 vladislavdonchev

I am available to help with testing - anything you need.

ehartford avatar Mar 12 '25 18:03 ehartford

any news?

euberdeveloper avatar Mar 13 '25 13:03 euberdeveloper

I've just completed the first working implementation of Qwen2.5VL on top of my previous Qwen2VL work, incorporating new components such as window attention and GLU MLP in the vision encoder.
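For anyone unfamiliar with the term, a GLU MLP replaces the plain two-layer MLP of the vision blocks with a gated variant. A rough PyTorch sketch of the general pattern (illustrative only; layer names and the SiLU activation are my assumptions, not the actual llama.cpp or Transformers code):

import torch
import torch.nn as nn

class VisionGLUMLP(nn.Module):
    # Computes down(act(gate(x)) * up(x)): the gate branch modulates the up projection.
    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.gate_proj = nn.Linear(dim, hidden_dim)
        self.up_proj = nn.Linear(dim, hidden_dim)
        self.down_proj = nn.Linear(hidden_dim, dim)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down_proj(self.act(self.gate_proj(x)) * self.up_proj(x))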

Before I refine the code and request a PR review, anyone interested in testing it and providing feedback can find the latest version of llama.cpp Qwen2.5VL here.

Instructions for building llama-qwen2vl-cli and model conversion are available in the draft PR. Alternatively, you can try the pre-converted 3B model available on Hugging Face.

Some Results

Image 1:
The image shows a bustling urban street scene. On the left side, there is a prominent statue of a man standing on a pedestal. The statue, dressed in a long coat and holding a document, is a focal point of the scene. The statue appears to be made of bronze and is situated in front of a building with classical architectural elements, such as columns and a cornice.
In the background, there are several tall buildings, including a prominent skyscraper. The buildings are adorned with American flags, indicating a sense of national pride and possibly a location in the United States, such as New York City. The flags are flying high from the rooftops and are also visible from various windows and balconies on the buildings.
The street is busy with pedestrians and vehicles. People are walking along the sidewalk, and there are streetlights and street signs, indicating a well-maintained urban area. There are also food trucks parked along the street, suggesting a lively and commercial area.
The overall atmosphere of the image is vibrant and dynamic, typical of a busy downtown area in a major city. The combination of historical and contemporary elements, such as the statue and modern skyscrapers, creates a rich tapestry of urban life.

Image 2:
The image depicts a serene beach scene featuring a person and a dog. The person, a woman, is sitting on the sandy beach facing the ocean. She is wearing a plaid shirt and appears to be smiling warmly. The dog, a light-colored Labrador Retriever, is sitting on the sand facing the woman. The dog's front paws are extended towards the woman, as if it is reaching out or offering something. The background shows a calm ocean with gentle waves crashing onto the shore, and the sky is clear with a soft light suggesting either early morning or late afternoon. The overall atmosphere of the image is peaceful and intimate, capturing a moment of connection between the person and the dog.

All captions were created with the following CLI command:

./llama-qwen2vl-cli -m qwen25vl-3b-instruct.gguf --mmproj qwen25vl-vision.gguf -p "Describe this image." --image demo.jpg

HimariO avatar Mar 16 '25 10:03 HimariO