
Feature Request: When will llama.cpp support converting the Qwen2.5-VL 7B/72B models to GGUF?

Open sooit opened this issue 9 months ago • 8 comments

Prerequisites

  • [x] I am running the latest code. Mention the version if possible as well.
  • [x] I carefully followed the README.md.
  • [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • [x] I reviewed the Discussions, and have a new and useful enhancement to share.

Feature Description

When will llama.cpp support converting the Qwen2.5-VL 7B/72B model files to GGUF? I used the following command:

python convert_hf_to_gguf.py /home/jason/.cache/modelscope/hub/Qwen/Qwen2.5-VL-72B-Instruct --outfile qwen2_5_vl_72b_instruct.gguf --outtype f16

It fails with the following error messages:

INFO:hf-to-gguf:Loading model: Qwen2.5-VL-72B-Instruct
ERROR:hf-to-gguf:Model Qwen2_5_VLForConditionalGeneration is not supported
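For context, convert_hf_to_gguf.py maps Hugging Face architecture names to converter classes via the @Model.register decorator, and this error is raised when no class is registered for the model's architecture string. Below is a hypothetical sketch of what a registration for Qwen2.5-VL's text tower might look like, assuming it could reuse the existing Qwen2-VL handling; the class Qwen2_5_VLModel does not exist upstream, and the inheritance from Qwen2VLModel is an assumption, not the actual implementation:

```python
# Hypothetical sketch only -- this class does not exist in llama.cpp.
# It would be added inside convert_hf_to_gguf.py, which already defines
# Model, Qwen2VLModel, and imports gguf. The string passed to
# @Model.register must match the "architectures" entry in the model's
# config.json ("Qwen2_5_VLForConditionalGeneration" here).
@Model.register("Qwen2_5_VLForConditionalGeneration")
class Qwen2_5_VLModel(Qwen2VLModel):
    # Assumes the language-model weights are layout-compatible with Qwen2-VL.
    model_arch = gguf.MODEL_ARCH.QWEN2VL
```

Even if this assumption held, it would only cover the language-model tensors; the vision encoder would presumably still need its own conversion path, analogous to the separate surgery script used for Qwen2-VL.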

Motivation

I hope llama.cpp can add support for new LLMs such as Qwen2.5-VL as soon as possible.

Possible Implementation

No response

sooit · Jan 31 '25 11:01