
Feature Request: Add support of convert.py for model Qwen2.5-Omni-7B

Open nickhuang99 opened this issue 9 months ago • 6 comments

Prerequisites

  • [x] I am running the latest code. Mention the version if possible as well.
  • [x] I carefully followed the README.md.
  • [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • [x] I reviewed the Discussions, and have a new and useful enhancement to share.

Feature Description

I hope the "Qwen2_5OmniModel" architecture can be supported. Is this a PyTorch issue?

INFO:hf-to-gguf:Loading model: Qwen2.5-Omni-7B
ERROR:hf-to-gguf:Model Qwen2_5OmniModel is not supported

The model's Git LFS link: https://cnb.cool/ai-models/Qwen/Qwen2.5-Omni-7B.git
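For context on where this error comes from: llama.cpp's `convert_hf_to_gguf.py` looks up the `architectures` entry from the model's `config.json` in a registry of converter classes, and an unregistered name fails with "Model ... is not supported". The following is a minimal illustrative sketch of that registry pattern, not llama.cpp's actual code; the class and function names here are hypothetical:

```python
# Hypothetical sketch of an architecture registry, similar in spirit to
# how convert_hf_to_gguf.py dispatches on the "architectures" field.
_model_classes: dict[str, type] = {}

def register(*names: str):
    """Decorator mapping one or more HF architecture names to a converter class."""
    def wrapper(cls: type) -> type:
        for name in names:
            _model_classes[name] = cls
        return cls
    return wrapper

@register("Qwen2ForCausalLM")  # a supported architecture
class Qwen2Converter:
    pass

def from_model_architecture(arch: str) -> type:
    """Return the converter class for `arch`, or fail like the reported error."""
    try:
        return _model_classes[arch]
    except KeyError:
        raise NotImplementedError(f"Model {arch!r} is not supported") from None
```

Under this scheme, supporting Qwen2.5-Omni would mean registering a converter class for the `Qwen2_5OmniModel` architecture name and implementing its tensor mapping; until then the lookup fails exactly as in the log above.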

Motivation

Of course, the more model types supported, the better.

Possible Implementation

No response

nickhuang99 avatar Mar 29 '25 09:03 nickhuang99

+1

hwwxg avatar Mar 29 '25 13:03 hwwxg

It would be good for the project to get used to omnimodality, as L4 will also be an omnimodal model.

Dampfinchen avatar Mar 29 '25 16:03 Dampfinchen

+1

wl826214 avatar Mar 31 '25 08:03 wl826214

Related to https://github.com/ggml-org/llama.cpp/issues/12673

gianpaj avatar Apr 01 '25 20:04 gianpaj

+1

freedom-all avatar Apr 16 '25 08:04 freedom-all

+1

lidll avatar May 08 '25 10:05 lidll

+1

sinand99 avatar May 13 '25 12:05 sinand99

+1

IGNORANCEzzZ avatar May 26 '25 06:05 IGNORANCEzzZ

This issue was closed because it has been inactive for 14 days since being marked as stale.

github-actions[bot] avatar Jul 10 '25 01:07 github-actions[bot]