unsloth
[TEMP FIX] Ollama / llama.cpp: cannot find tokenizer merges in model file [duplicate]
Hi, I tried fine-tuning both Llama 3.1-8B-Instruct and Llama 3-8B-Instruct following the notebook you provided here.
The training phase completed without errors, and I generated the GGUF quantized at 8-bit.
However, I cannot load the GGUF in LM Studio; it fails with this error:
"llama.cpp error: 'error loading model vocabulary: cannot find tokenizer merges in model file\n'"
Have you run into this kind of problem?
I successfully fine-tuned both Mistral-Instruct and Mistral-Small-Instruct without any issues. I'm only experiencing problems with Llama.