llama.cpp
Error converting SmolLM-1.7B-Instruct
python /content/llama.cpp/convert_hf_to_gguf.py --outtype f16 SmolLM-1.7B-Instruct --outfile SmolLM-1.7B-Instruct.f16.gguf
INFO:hf-to-gguf:Set model tokenizer
WARNING:hf-to-gguf:
WARNING:hf-to-gguf:**************************************************************************************
WARNING:hf-to-gguf:** WARNING: The BPE pre-tokenizer was not recognized!
WARNING:hf-to-gguf:** There are 2 possible reasons for this:
WARNING:hf-to-gguf:** - the model has not been added to convert_hf_to_gguf_update.py yet
WARNING:hf-to-gguf:** - the pre-tokenization config has changed upstream
WARNING:hf-to-gguf:** Check your model files and convert_hf_to_gguf_update.py and update them accordingly.
WARNING:hf-to-gguf:** ref: https://github.com/ggerganov/llama.cpp/pull/6920
WARNING:hf-to-gguf:**
WARNING:hf-to-gguf:** chkhsh: 855059429035d75a914d1eda9f10a876752e281a054a7a3d421ef0533e5b6249
WARNING:hf-to-gguf:**************************************************************************************
WARNING:hf-to-gguf:
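For context on the warning: convert_hf_to_gguf.py identifies the BPE pre-tokenizer by hashing the tokenizer's output on a fixed probe string and looking the digest ("chkhsh") up in a table of known models; an unrecognized digest aborts with the block above, and the fix (per the referenced PR #6920 workflow) is to register the new digest via convert_hf_to_gguf_update.py. A minimal sketch of that dispatch mechanism — the table entry, the "smollm" name, and the function names here are illustrative assumptions, not llama.cpp's actual code:

```python
import hashlib

def chkhsh_of(token_ids: list[int]) -> str:
    """Digest of a tokenizer's output on a fixed probe string,
    mirroring the idea behind convert_hf_to_gguf.py's chkhsh."""
    return hashlib.sha256(str(token_ids).encode()).hexdigest()

# Digest copied from the log above; mapping it to the name "smollm"
# is an assumption for illustration, not a confirmed llama.cpp entry.
KNOWN_PRE_TOKENIZERS = {
    "855059429035d75a914d1eda9f10a876752e281a054a7a3d421ef0533e5b6249": "smollm",
}

def resolve_pre_tokenizer(digest: str) -> str:
    """Look up a chkhsh; raise like the converter does when it is unknown."""
    name = KNOWN_PRE_TOKENIZERS.get(digest)
    if name is None:
        raise NotImplementedError(
            "BPE pre-tokenizer was not recognized! "
            f"Register this chkhsh via convert_hf_to_gguf_update.py: {digest}"
        )
    return name
```

In other words, once a llama.cpp release includes an entry for this model's digest, the same convert command succeeds without the warning.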
Also, for those who are interested, chatllm.cpp supports this model.
This issue was closed because it has been inactive for 14 days since being marked as stale.