
Illegal instruction (core dumped)

Open • kwcooper opened this issue 2 years ago • 2 comments

When running ./chat:

main: seed = 1680031538
llama_model_load: loading model from 'ggml-alpaca-7b-q4.bin' - please wait ...
llama_model_load: ggml ctx size = 10959.49 MB
Illegal instruction (core dumped)

Putting it through gdb:

(gdb) run
Starting program: /chat 
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
main: seed = 1680031597
llama_model_load: loading model from 'ggml-alpaca-7b-q4.bin' - please wait ...
llama_model_load: ggml ctx size = 10959.49 MB

Program received signal SIGILL, Illegal instruction.
0x000055555556e6c2 in ggml_new_tensor_impl ()

and

(gdb) bt full
#0  0x000055555556e832 in ggml_new_tensor_impl ()
No symbol table info available.
#1  0x000055555556eb64 in ggml_new_tensor_2d ()
No symbol table info available.
#2  0x0000555555560d0b in llama_model_load(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, llama_model&, gpt_vocab&, int) ()
No symbol table info available.
#3  0x000055555555ae67 in main ()
No symbol table info available.

Perhaps because I'm running on an AMD Opteron CPU?

kwcooper Mar 29 '23
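
A SIGILL at this point usually means the binary was built with SIMD extensions that the host CPU does not implement; ggml builds of this era commonly enable AVX, AVX2, FMA, and F16C, and AVX2/FMA are missing on pre-Haswell Intel parts and many Opterons. The following is a minimal standalone check, not part of alpaca.cpp, that assumes GCC or Clang (for __builtin_cpu_supports) and simply reports which of these extensions the CPU supports:

// cpu_features.cpp - minimal sketch (not part of alpaca.cpp) that prints
// whether the host CPU supports the SIMD extensions ggml is usually built
// with. Assumes GCC or Clang for __builtin_cpu_supports.
// Build and run:  g++ -O2 cpu_features.cpp -o cpu_features && ./cpu_features
#include <cstdio>

int main() {
    // A binary compiled with -mavx / -mavx2 / -mfma raises SIGILL the first
    // time it executes one of these instructions on a CPU that reports 0 here.
    std::printf("avx:  %d\n", __builtin_cpu_supports("avx"));
    std::printf("avx2: %d\n", __builtin_cpu_supports("avx2"));
    std::printf("fma:  %d\n", __builtin_cpu_supports("fma"));
    return 0;
}

If avx2 or fma comes back 0, rebuilding with the corresponding -mavx2/-mfma compiler flags removed (if present in the Makefile), or using a build that targets older CPUs such as the fork linked below, should avoid the illegal instruction.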

I'm on an Intel Xeon E5-2643 - same problem

szhatchenko Mar 31 '23

This works for my old 4th gen i7-3820

https://github.com/SiemensSchuckert/alpaca.cpp

fancellu Mar 31 '23