llama.cpp
[User] bad magic
I am running the latest code and I have no idea what this "bad magic" error means. My model is standard; I converted it with convert.py from the llama.cpp source tree. Does anyone have a solution?
There is a problem with your model...
It's likely that your model is either outdated or incompatible.
Which model are you trying to run, specifically?
ggml-vic7b-q5_1.bin
from https://huggingface.co/eachadea/ggml-vicuna-7b-1.1/tree/main
It loads in https://github.com/Atome-FE/llama-node, but not with the latest compiled ./chat_mac
in this repo.
Maybe its old format only works with the old llama.cpp code bundled in Atome-FE/llama-node...
I also can't load openbuddy-openllama-13b-v7-q4_K.bin
or openllama-7b-v5-q5_K.bin
from https://huggingface.co/OpenBuddy/openbuddy-ggml/tree/main
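A "bad magic" error means the loader rejected the very first four bytes of the file, before reading any weights. The GGML family went through several containers, each with its own magic value. This sketch (the magic constants are taken from public llama.cpp history and are an assumption about the exact set a given build accepts) shows how a file can be checked before trying to load it:

```python
import struct

# Known 4-byte magics of the GGML file-format family, as little-endian
# uint32 values (assumption: constants from public llama.cpp history).
KNOWN_MAGICS = {
    0x67676D6C: "ggml (original, unversioned)",
    0x67676D66: "ggmf (adds a version field)",
    0x67676A74: "ggjt (versioned, mmap-friendly)",
    0x46554747: "gguf (current container)",
}

def identify(path: str) -> str:
    """Read the first 4 bytes and name the container, or report bad magic."""
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
    return KNOWN_MAGICS.get(magic, f"unknown magic 0x{magic:08x} (bad magic)")
```

A loader that only knows a subset of these values prints "bad magic" for every file outside that subset, even when the file itself is a perfectly valid newer (or older) GGML variant.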
make
sysctl: unknown oid 'hw.optional.arm64'
I llama.cpp build info:
I UNAME_S: Darwin
I UNAME_P: i386
I UNAME_M: x86_64
I CFLAGS: -I. -O3 -DNDEBUG -std=c11 -fPIC -pthread -mf16c -mfma -mavx -mavx2 -DGGML_USE_ACCELERATE
I CXXFLAGS: -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread
I LDFLAGS: -framework Accelerate
I CC: Apple clang version 14.0.3 (clang-1403.0.22.14.1)
I CXX: Apple clang version 14.0.3 (clang-1403.0.22.14.1)
cc -I. -O3 -DNDEBUG -std=c11 -fPIC -pthread -mf16c -mfma -mavx -mavx2 -DGGML_USE_ACCELERATE -c ggml.c -o ggml.o
c++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread -c utils.cpp -o utils.o
c++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread chat.cpp ggml.o utils.o -o chat -framework Accelerate
c++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread quantize.cpp ggml.o utils.o -o quantize -framework Accelerate
./chat --model "/Users/linonetwo/Documents/languageModel/ggml-vic7b-q5_1.bin" --threads 4 --prompt "介绍一下Tiddlywiki"
main: seed = 1689748090
llama_model_load: loading model from '/Users/linonetwo/Documents/languageModel/ggml-vic7b-q5_1.bin' - please wait ...
llama_model_load: invalid model file '/Users/linonetwo/Documents/languageModel/ggml-vic7b-q5_1.bin' (bad magic)
main: failed to load model from '/Users/linonetwo/Documents/languageModel/ggml-vic7b-q5_1.bin'
./chat --model "/Users/linonetwo/Documents/languageModel/openllama-7b-v5-q5_K.bin" --threads 4 --prompt "介绍一下Tiddlywiki"
main: seed = 1689748967
llama_model_load: loading model from '/Users/linonetwo/Documents/languageModel/openllama-7b-v5-q5_K.bin' - please wait ...
llama_model_load: invalid model file '/Users/linonetwo/Documents/languageModel/openllama-7b-v5-q5_K.bin' (bad magic)
main: failed to load model from '/Users/linonetwo/Documents/languageModel/openllama-7b-v5-q5_K.bin'
./chat --model "/Users/linonetwo/Documents/languageModel/openbuddy-openllama-13b-v7-q4_K.bin" --threads 4 --prompt "介绍一下Tiddlywiki"
main: seed = 1689748989
llama_model_load: loading model from '/Users/linonetwo/Documents/languageModel/openbuddy-openllama-13b-v7-q4_K.bin' - please wait ...
llama_model_load: invalid model file '/Users/linonetwo/Documents/languageModel/openbuddy-openllama-13b-v7-q4_K.bin' (bad magic)
main: failed to load model from '/Users/linonetwo/Documents/languageModel/openbuddy-openllama-13b-v7-q4_K.bin'
ggml-vic7b-q5_1.bin
from https://huggingface.co/eachadea/ggml-vicuna-7b-1.1/tree/main
Outdated.
Also can't load
openllama-7b-v5-q5_K.bin
openllama-7b-v5-q5_K.bin loads for me - it likely requires an updated version of llama.cpp - I'm running the latest build.
Thanks. Outdated code loads outdated models, and the latest code loads the latest models.
I'll have to open a PR updating llama.cpp in https://github.com/Atome-FE/llama-node to fix this, if @hlhr202 is still around to accept PRs.
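The version mismatch is visible right in the file header: after the magic, the versioned containers carry a uint32 revision, and a loader built before a given revision bails out at this first read. That is consistent with the q4_K/q5_K files above failing in the older binary, since the k-quant types arrived in a later ggjt revision. A minimal header reader, assuming the classic 4-byte magic + uint32 version layout (constants again from public llama.cpp history):

```python
import struct

GGML = 0x67676D6C  # original container: no version field at all
GGMF = 0x67676D66  # added a uint32 version after the magic
GGJT = 0x67676A74  # versioned, mmap-friendly; later revisions added k-quants

def ggml_version(path: str):
    """Return (container, version) from a GGML-family model file header."""
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
        if magic == GGML:
            return ("ggml", None)  # unversioned format
        if magic in (GGMF, GGJT):
            (version,) = struct.unpack("<I", f.read(4))
            return ("ggmf" if magic == GGMF else "ggjt", version)
        raise ValueError(f"not a GGML-family file (magic 0x{magic:08x})")
```

Until llama-node catches up, the practical workaround is to match model file and loader: either re-convert the model with the convert script from the same llama.cpp checkout you build, or pick a quantization that the older loader's container revision supports.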
This issue was closed because it has been inactive for 14 days since being marked as stale.