Feature Request: Add support in convert.py for the Qwen2.5-Omni-7B model
Prerequisites
- [x] I am running the latest code. Mention the version if possible as well.
- [x] I carefully followed the README.md.
- [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- [x] I reviewed the Discussions, and have a new and useful enhancement to share.
Feature Description
I hope the `Qwen2_5OmniModel` architecture can be supported. Is this a PyTorch issue? The conversion fails with:

```
INFO:hf-to-gguf:Loading model: Qwen2.5-Omni-7B
ERROR:hf-to-gguf:Model Qwen2_5OmniModel is not supported
```
The model's Git LFS link: https://cnb.cool/ai-models/Qwen/Qwen2.5-Omni-7B.git
Motivation
Of course, the more model types supported, the better.
Possible Implementation
No response
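For context on where the error originates: llama.cpp's `convert_hf_to_gguf.py` dispatches on the `architectures` field of a model's `config.json` through a registry of converter classes, and an unregistered name produces the "not supported" error above. The sketch below is a minimal, self-contained illustration of that dispatch pattern; the names (`register`, `Qwen2Converter`, `get_converter`) are hypothetical and do not reflect the actual llama.cpp API.

```python
# Hypothetical sketch of an architecture-name registry, loosely modeled on how
# convert scripts map HF architecture strings to converter classes.
converters: dict[str, type] = {}

def register(*arch_names):
    """Decorator mapping one or more HF architecture names to a converter class."""
    def wrap(cls):
        for name in arch_names:
            converters[name] = cls
        return cls
    return wrap

@register("Qwen2ForCausalLM")
class Qwen2Converter:
    """Placeholder converter for an already-supported architecture."""
    def convert(self) -> str:
        return "converting Qwen2 weights to GGUF"

def get_converter(arch: str) -> type:
    """Look up the converter for an architecture, mirroring the error in the report."""
    try:
        return converters[arch]
    except KeyError:
        raise NotImplementedError(f"Model {arch} is not supported")
```

Under this model, `"Qwen2_5OmniModel"` is simply absent from the registry, so supporting it would mean registering a converter class for that architecture name and mapping its tensor layout to GGUF.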
+1
It would be good for the project to get accustomed to omnimodality, as L4 will also be an omnimodal model.
Related to https://github.com/ggml-org/llama.cpp/issues/12673
This issue was closed because it has been inactive for 14 days since being marked as stale.