
How to convert it to GGUF/GGML for general use?

Open · YuanfengZhang opened this issue 1 year ago · 0 comments

Sorry for the basic question, but I did search for answers and tried a few things before asking. Using llama.cpp:

python ./convert-hf-to-gguf.py \
../../deepseek-vl-7b-chat \
--outtype f16 \
--outfile ../../deepseek-vl-7b-chat/deepseek-v1-7b-chat.gguf

returned:

Loading model: deepseek-vl-7b-chat
Traceback (most recent call last):
  File "/home/zhangyuanfeng/software/ollama/llm/llama.cpp/./convert-hf-to-gguf.py", line 2099, in <module>
    main()
  File "/home/zhangyuanfeng/software/ollama/llm/llama.cpp/./convert-hf-to-gguf.py", line 2079, in main
    model_class = Model.from_model_architecture(hparams["architectures"][0])
  File "/home/zhangyuanfeng/software/ollama/llm/llama.cpp/./convert-hf-to-gguf.py", line 215, in from_model_architecture
    raise NotImplementedError(f'Architecture {arch!r} not supported!') from None
NotImplementedError: Architecture 'MultiModalityCausalLM' not supported!
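For context, the traceback shows that the converter dispatches on the "architectures" field of the model's config.json and raises when no converter class is registered for that name. A minimal sketch of that lookup (the `reported_architecture` helper and the registry set here are illustrative, not llama.cpp's actual code):

```python
import json

# Hypothetical subset of architectures a converter might recognize;
# the real list lives inside convert-hf-to-gguf.py.
SUPPORTED_ARCHITECTURES = {"LlamaForCausalLM", "MistralForCausalLM"}

def reported_architecture(config_text: str) -> str:
    """Return the first architecture name a HF config.json declares."""
    hparams = json.loads(config_text)
    return hparams["architectures"][0]

# Fragment mirroring what DeepSeek-VL's config.json reports,
# per the traceback above.
sample_config = '{"architectures": ["MultiModalityCausalLM"]}'

arch = reported_architecture(sample_config)
if arch not in SUPPORTED_ARCHITECTURES:
    # This is the condition that triggers the NotImplementedError.
    print(f"Architecture {arch!r} not supported!")
```

So the failure is not a usage error: DeepSeek-VL's `MultiModalityCausalLM` wrapper simply had no registered converter in llama.cpp at the time.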

So, is there any feasible method? Thanks.

YuanfengZhang · Mar 27 '24