FileNotFoundError when running convert.py script
What is the issue?
I am trying to convert a model using the convert.py script from the llama.cpp repository, but I am hitting a FileNotFoundError: the script cannot find the tokenizer file, even though tokenizer.json is present in the model directory. The full output is:
python /mnt/part1/ollama/llm/llama.cpp/convert.py /mnt/part1/models/Qwen1.5-7B-Chat --outtype f16 --outfile converted.bin
INFO:convert:Loading model file /mnt/part1/models/Qwen1.5-7B-Chat/model-00001-of-00004.safetensors
INFO:convert:Loading model file /mnt/part1/models/Qwen1.5-7B-Chat/model-00001-of-00004.safetensors
INFO:convert:Loading model file /mnt/part1/models/Qwen1.5-7B-Chat/model-00002-of-00004.safetensors
INFO:convert:Loading model file /mnt/part1/models/Qwen1.5-7B-Chat/model-00003-of-00004.safetensors
INFO:convert:Loading model file /mnt/part1/models/Qwen1.5-7B-Chat/model-00004-of-00004.safetensors
INFO:convert:model parameters count : 7721324544 (8B)
INFO:convert:params = Params(n_vocab=151936, n_embd=4096, n_layer=32, n_ctx=32768, n_ff=11008, n_head=32, n_head_kv=32, n_experts=None, n_experts_used=None, f_norm_eps=1e-06, rope_scaling_type=None, f_rope_freq_base=1000000.0, f_rope_scale=None, n_orig_ctx=None, rope_finetuned=None, ftype=<GGMLFileType.MostlyF16: 1>, path_model=PosixPath('/mnt/part1/models/Qwen1.5-7B-Chat'))
Traceback (most recent call last):
File "/mnt/part1/ollama/llm/llama.cpp/convert.py", line 1714, in
The model directory contains:

/mnt/part1/models/Qwen1.5-7B-Chat/
├── config.json
├── LICENSE
├── merges.txt
├── model-00001-of-00004.safetensors
├── model-00002-of-00004.safetensors
├── model-00003-of-00004.safetensors
├── model-00004-of-00004.safetensors
├── model.safetensors.index.json
├── README.md
├── tokenizer_config.json
├── tokenizer.json
└── vocab.json
OS: Linux
GPU: Nvidia
CPU: Intel
Ollama version: 0.1.33