
Error converting SmolLM-1.7B-Instruct

0wwafa opened this issue 1 year ago • 1 comment

python /content/llama.cpp/convert_hf_to_gguf.py --outtype f16 SmolLM-1.7B-Instruct --outfile SmolLM-1.7B-Instruct.f16.gguf

INFO:hf-to-gguf:Set model tokenizer
WARNING:hf-to-gguf:

WARNING:hf-to-gguf:**************************************************************************************
WARNING:hf-to-gguf:** WARNING: The BPE pre-tokenizer was not recognized!
WARNING:hf-to-gguf:**          There are 2 possible reasons for this:
WARNING:hf-to-gguf:**          - the model has not been added to convert_hf_to_gguf_update.py yet
WARNING:hf-to-gguf:**          - the pre-tokenization config has changed upstream
WARNING:hf-to-gguf:**          Check your model files and convert_hf_to_gguf_update.py and update them accordingly.
WARNING:hf-to-gguf:** ref:     https://github.com/ggerganov/llama.cpp/pull/6920
WARNING:hf-to-gguf:**
WARNING:hf-to-gguf:** chkhsh:  855059429035d75a914d1eda9f10a876752e281a054a7a3d421ef0533e5b6249
WARNING:hf-to-gguf:**************************************************************************************
WARNING:hf-to-gguf:
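The `chkhsh` in the warning is a fingerprint of the tokenizer's pre-tokenization behavior: the converter encodes a fixed probe string and hashes the resulting token IDs, then looks the hash up in a table of known pre-tokenizers. The sketch below illustrates that lookup logic only; the helper names, the probe token IDs, and the mapping entries are all hypothetical, not the actual tables in `convert_hf_to_gguf.py` (see PR #6920 for the real workflow).

```python
import hashlib

def tokenizer_fingerprint(token_ids):
    # Hypothetical helper: hash the token IDs the tokenizer produces for a
    # fixed probe string, yielding a stable fingerprint of its behavior.
    return hashlib.sha256(str(token_ids).encode("utf-8")).hexdigest()

# Hypothetical mapping from fingerprint to a known pre-tokenizer name.
# In the real converter this table is regenerated by convert_hf_to_gguf_update.py.
KNOWN_PRETOKENIZERS = {
    tokenizer_fingerprint([101, 7592, 102]): "example-pretokenizer",
}

def resolve_pretokenizer(token_ids):
    # Mirrors the converter's check: an unknown fingerprint triggers the
    # "BPE pre-tokenizer was not recognized" failure seen in the log above.
    chkhsh = tokenizer_fingerprint(token_ids)
    name = KNOWN_PRETOKENIZERS.get(chkhsh)
    if name is None:
        raise ValueError(f"BPE pre-tokenizer not recognized (chkhsh: {chkhsh})")
    return name
```

Under this scheme, adding support for a new model means recording its fingerprint in the table, which is why the warning points at updating `convert_hf_to_gguf_update.py`.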

0wwafa avatar Jul 18 '24 12:07 0wwafa

Also, for those who are interested, chatllm.cpp supports this.

foldl avatar Jul 20 '24 09:07 foldl

This issue was closed because it has been inactive for 14 days since being marked as stale.

github-actions[bot] avatar Sep 03 '24 01:09 github-actions[bot]