ollama
Support Qwen VL
Could you please support the Qwen VL model?
Hoping to see Qwen VL support added.
This is really a must-have model for Chinese users, as it's the SOTA vision model for recognizing Chinese characters. But I don't think it will be easy, because there is no GGUF-format model on HuggingFace yet.
There is an Int4 model: https://huggingface.co/Qwen/Qwen-VL-Chat-Int4
@thesby ollama currently runs GGUF models only. PyTorch or safetensors models need to be converted to GGUF first: https://github.com/ollama/ollama/blob/main/docs/import.md. But converting Qwen-VL to GGUF format is not yet supported by llama.cpp.
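For models that llama.cpp *can* convert, the general import flow described in the linked docs looks roughly like this (a sketch; the model paths and names here are illustrative, and as noted above, this does not yet work for Qwen-VL):

```shell
# Sketch of the generic GGUF import flow (illustrative paths/names).
# Step 1: convert HF weights to GGUF with llama.cpp's converter script
python llama.cpp/convert-hf-to-gguf.py ./my-hf-model --outfile my-model.gguf

# Step 2: point a Modelfile at the resulting GGUF file
printf 'FROM ./my-model.gguf\n' > Modelfile

# Step 3: register and run it with ollama
ollama create my-model -f Modelfile
ollama run my-model
```

Until llama.cpp's converter handles Qwen-VL's architecture (including its vision encoder), step 1 fails, which is what blocks this request.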
+1, hoping for Qwen VL support as well.
hope + 1
+1
+1
+1
+1
This multimodal model is really great; adding support for it would be much appreciated.
+1
+1
+1
llama.cpp does not yet support converting Qwen-VL to GGUF format; you'll have to go bump the issue in their repo...
+1
+1
Qwen VL works much better than LLaVA 1.6, so it would be good to be able to use it with ollama. Its OCR is also much better and supports many more languages.