[User] bad magic

vinitran opened this issue 1 year ago • 2 comments

I am running the latest code, and I have no idea what is causing the "bad magic" error at load time. My model is standard; I converted it with convert.py from the llama.cpp source. Does anyone have a solution?

vinitran avatar Jun 13 '23 08:06 vinitran

There is a problem with your model...

reddiamond1234 avatar Jun 14 '23 12:06 reddiamond1234

I am running the latest code, and I have no idea what is causing the "bad magic" error at load time. My model is standard; I converted it with convert.py from the llama.cpp source. Does anyone have a solution?

It's likely that your model is either outdated or incompatible.

What model are you trying to run specifically?

ghost avatar Jun 14 '23 17:06 ghost
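
For anyone hitting the same error: "bad magic" means the loader read the first four bytes of the model file and they did not match the magic number it expects, so loading fails before any weights are touched. Below is a minimal diagnostic sketch in C that prints a file's magic. The constants are the GGML-era magics; which of them a given build accepts varies between forks and versions, so treat the labels as a rough guide rather than a definitive compatibility check.

/* magic_check.c - print the 4-byte magic of a GGML-era model file.
   Build: cc -o magic_check magic_check.c
   Run:   ./magic_check path/to/model.bin */
#include <stdio.h>
#include <stdint.h>

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <model.bin>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror(argv[1]);
        return 1;
    }
    uint32_t magic = 0;
    if (fread(&magic, sizeof magic, 1, f) != 1) {
        fprintf(stderr, "could not read 4 bytes\n");
        fclose(f);
        return 1;
    }
    fclose(f);
    printf("magic: 0x%08x\n", magic);
    switch (magic) {
        case 0x67676d6c: puts("'ggml' - legacy unversioned format"); break;
        case 0x67676d66: puts("'ggmf' - legacy versioned format");   break;
        case 0x67676a74: puts("'ggjt' - mmap-able format (v1-v3)");  break;
        case 0x46554747: puts("'GGUF' - current format (introduced Aug 2023)"); break;
        default:         puts("unknown - probably not a ggml/gguf model file"); break;
    }
    return 0;
}

If the magic is recognized but a build still refuses the file, the container version (for example ggjt v3, which came with the k-quants) is likely newer than that loader supports.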

ggml-vic7b-q5_1.bin from https://huggingface.co/eachadea/ggml-vicuna-7b-1.1/tree/main

It can be loaded by https://github.com/Atome-FE/llama-node, but not by the latest compiled ./chat_mac from this repo.

Maybe its old format only works with the old llama.cpp code in Atome-FE/llama-node...

linonetwo avatar Jul 19 '23 06:07 linonetwo

I also can't load openbuddy-openllama-13b-v7-q4_K.bin or openllama-7b-v5-q5_K.bin from https://huggingface.co/OpenBuddy/openbuddy-ggml/tree/main

make
sysctl: unknown oid 'hw.optional.arm64'
I llama.cpp build info:
I UNAME_S:  Darwin
I UNAME_P:  i386
I UNAME_M:  x86_64
I CFLAGS:   -I.              -O3 -DNDEBUG -std=c11   -fPIC -pthread -mf16c -mfma -mavx -mavx2 -DGGML_USE_ACCELERATE
I CXXFLAGS: -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread
I LDFLAGS:   -framework Accelerate
I CC:       Apple clang version 14.0.3 (clang-1403.0.22.14.1)
I CXX:      Apple clang version 14.0.3 (clang-1403.0.22.14.1)

cc  -I.              -O3 -DNDEBUG -std=c11   -fPIC -pthread -mf16c -mfma -mavx -mavx2 -DGGML_USE_ACCELERATE   -c ggml.c -o ggml.o
c++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread -c utils.cpp -o utils.o
c++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread chat.cpp ggml.o utils.o -o chat  -framework Accelerate
c++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread quantize.cpp ggml.o utils.o -o quantize  -framework Accelerate

./chat --model "/Users/linonetwo/Documents/languageModel/ggml-vic7b-q5_1.bin"  --threads 4  --prompt "介绍一下Tiddlywiki"
main: seed = 1689748090
llama_model_load: loading model from '/Users/linonetwo/Documents/languageModel/ggml-vic7b-q5_1.bin' - please wait ...
llama_model_load: invalid model file '/Users/linonetwo/Documents/languageModel/ggml-vic7b-q5_1.bin' (bad magic)
main: failed to load model from '/Users/linonetwo/Documents/languageModel/ggml-vic7b-q5_1.bin'
./chat --model "/Users/linonetwo/Documents/languageModel/openllama-7b-v5-q5_K.bin"  --threads 4  --prompt "介绍一下Tiddlywiki"
main: seed = 1689748967
llama_model_load: loading model from '/Users/linonetwo/Documents/languageModel/openllama-7b-v5-q5_K.bin' - please wait ...
llama_model_load: invalid model file '/Users/linonetwo/Documents/languageModel/openllama-7b-v5-q5_K.bin' (bad magic)
main: failed to load model from '/Users/linonetwo/Documents/languageModel/openllama-7b-v5-q5_K.bin'
./chat --model "/Users/linonetwo/Documents/languageModel/openbuddy-openllama-13b-v7-q4_K.bin"  --threads 4  --prompt "介绍一下Tiddlywiki"
main: seed = 1689748989
llama_model_load: loading model from '/Users/linonetwo/Documents/languageModel/openbuddy-openllama-13b-v7-q4_K.bin' - please wait ...
llama_model_load: invalid model file '/Users/linonetwo/Documents/languageModel/openbuddy-openllama-13b-v7-q4_K.bin' (bad magic)
main: failed to load model from '/Users/linonetwo/Documents/languageModel/openbuddy-openllama-13b-v7-q4_K.bin'

linonetwo avatar Jul 19 '23 06:07 linonetwo

ggml-vic7b-q5_1.bin from https://huggingface.co/eachadea/ggml-vicuna-7b-1.1/tree/main

Outdated.

Also can't load openllama-7b-v5-q5_K.bin

openllama-7b-v5-q5_K.bin loads for me, so it likely just requires an updated version of llama.cpp. I'm running the latest build.

ghost avatar Jul 19 '23 13:07 ghost

Thanks. Outdated code loads outdated models, and the latest code loads the latest models.

I'll have to open a PR that updates llama.cpp in https://github.com/Atome-FE/llama-node to fix this, assuming @hlhr202 is still around to accept PRs.

linonetwo avatar Jul 20 '23 11:07 linonetwo
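
For anyone landing here later: if you have the original Hugging Face/PyTorch weights, re-converting with the current convert.py and re-quantizing generally produces a file the latest build accepts. Roughly, from the llama.cpp repo root (paths are placeholders, and convert.py's exact options have changed over time, so check --help on your checkout):

python3 convert.py /path/to/original-hf-weights
./quantize /path/to/ggml-model-f16.bin /path/to/ggml-model-q5_1.bin q5_1

If you only have the quantized .bin, look for a re-upload produced after the format change, or a migration script matching your file's version, if one exists.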

This issue was closed because it has been inactive for 14 days since being marked as stale.

github-actions[bot] avatar Apr 10 '24 01:04 github-actions[bot]