
Segmentation Fault with 7b on Raspberry Pi 3

Open leafyus opened this issue 2 years ago • 5 comments

leafy@raspberrypi:~/alpaca.cpp $ ./chat
main: seed = 1681116282
llama_model_load: loading model from 'ggml-alpaca-7b-q4.bin' - please wait ...
llama_model_load: ggml ctx size = 6065.34 MB
Segmentation fault

I tried to run the 7B Alpaca model on my Raspberry Pi 3, but I get a segmentation fault every time. I compiled it from source. The RPi 3 has 4 GB of RAM, is that the problem?
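For what it's worth, the log above already hints at the answer: llama_model_load reports a ggml ctx size of 6065.34 MB, which is more than 4 GB of physical RAM. A minimal sanity check, sketched as a hypothetical helper (alpaca.cpp itself does no such pre-flight check):

```python
# Rough sanity check: does the ggml context size reported by
# alpaca.cpp fit in the machine's physical RAM?
# (Hypothetical helper, not part of alpaca.cpp.)

def fits_in_ram(ctx_size_mb: float, ram_gb: float) -> bool:
    """Compare the ctx size printed by llama_model_load against RAM."""
    ram_mb = ram_gb * 1024
    return ctx_size_mb <= ram_mb

# The Pi 3 log above reports 6065.34 MB against 4 GB of RAM:
print(fits_in_ram(6065.34, 4))  # the ctx alone exceeds physical memory
```

If the reported ctx size exceeds RAM, the allocation (or a later write into it) can fail in ways that surface as a segfault rather than a clean error.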

leafyus avatar Apr 10 '23 08:04 leafyus

Same here: a segmentation fault, but on an old Linux x86_64 EliteBook laptop. In my case it was the -mavx flag causing the error. The build below compiles, but it runs pretty slowly without AVX: https://github.com/ggerganov/llama.cpp/issues/107

gcc -I. -O3 -DNDEBUG -std=c11 -fPIC -pthread -msse3 -c ggml.c -o ggml.o
g++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread chat.cpp ggml.o utils.o -o chat
g++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread quantize.cpp ggml.o utils.o -o quantize
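Rather than picking SIMD flags by trial and error, you can check which extensions the CPU actually advertises before compiling. On Linux that information is in /proc/cpuinfo; here is a small sketch (assumes Linux and x86 flag names; the helper names are my own):

```python
# Check /proc/cpuinfo for SIMD flags before compiling with -mavx or
# -msse3. Passing -mavx on a CPU without AVX lets the compiler emit
# AVX instructions that crash (illegal instruction / segfault) at runtime.

def cpu_flags(cpuinfo_text: str) -> set:
    """Parse the first 'flags' line of an x86 /proc/cpuinfo dump."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

def safe_simd_flag(cpuinfo_text: str) -> str:
    """Pick the most aggressive flag the CPU supports (sketch only)."""
    flags = cpu_flags(cpuinfo_text)
    if "avx" in flags:
        return "-mavx"
    if "sse3" in flags or "pni" in flags:  # the kernel reports SSE3 as 'pni'
        return "-msse3"
    return ""

# Real use on a Linux box:
# with open("/proc/cpuinfo") as f:
#     print(safe_simd_flag(f.read()))
```

On the EliteBook in question, this would presumably report no "avx" flag, which is why dropping -mavx fixed the crash.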

themanyone avatar Apr 10 '23 09:04 themanyone

Same problem here, working with an AWS EC2 instance. I have the problem even without building from source. The file size is 4.0 GB; maybe that's the problem...

BlueveryPi avatar Apr 16 '23 13:04 BlueveryPi

Hello, I also have the same issue on a Raspberry Pi 4 with 4 GB.

I'm not sure the file size is the issue, as this blog post got it working on an RPi 5, also with the same 4 GB of RAM.

yaozakai avatar Feb 25 '24 07:02 yaozakai

You might want to trim down any running processes, too. Are you running a non-graphical session, perhaps?

themanyone avatar Feb 26 '24 22:02 themanyone

Yeah, I run headless. I ended up using a 2-bit quantized model instead, which was 2+ GB in size, and it seemed to work.
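Back-of-the-envelope numbers support this: at k bits per weight, a 7B-parameter model needs roughly 7e9 * k / 8 bytes, plus some per-block overhead for quantization scales. A rough estimate (the 0.5 bits/weight overhead is an approximation for ggml-style block quantization, not an exact constant for any specific format):

```python
# Approximate size of a quantized 7B model at various bit widths.
# Assumes ~0.5 bits/weight of overhead for quantization scales
# (a rough figure, not exact for any particular ggml format).

def model_size_gib(n_params: float, bits_per_weight: float,
                   overhead_bits: float = 0.5) -> float:
    total_bits = n_params * (bits_per_weight + overhead_bits)
    return total_bits / 8 / 2**30

for bits in (16, 4, 2):
    print(f"{bits:2d}-bit: ~{model_size_gib(7e9, bits):.1f} GiB")
```

This puts a 2-bit 7B model at roughly 2 GiB, leaving headroom on a 4 GB board, while the 4-bit file (close to the 4.0 GB size reported earlier in the thread) does not, once the context and other buffers are added on top.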

yaozakai avatar Feb 27 '24 23:02 yaozakai