
Docker Images exit with exitcode 132 on AMD systems

LLukas22 opened this issue 2 years ago · 0 comments

The llama.cpp:light Docker image exits with exit code 132 when loading the model on both of my AMD-based systems, hinting at a missing CPU instruction. If I run the container on an Intel-based system I own, it works as expected.

Command used: docker run -v [MODELPATH]:/models ghcr.io/ggerganov/llama.cpp:light -m /models/ggjt-model.bin -p "Building a website can be done in 10 simple steps:" -n 512

Output:

2023-04-13 09:56:26 main: seed = 1681372586
2023-04-13 09:56:26 llama.cpp: loading model from /models/ggjt-model.bin
EXITED (132)
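For context on the diagnosis above: on Linux, an exit code above 128 means the process was killed by a signal, where code = 128 + signal number. 132 therefore corresponds to signal 4, SIGILL (illegal instruction), which is what a binary compiled with AVX2 raises on a CPU that lacks it. A minimal sketch to decode it:

```shell
# Exit codes above 128 encode a fatal signal: code = 128 + signal number.
sig=$((132 - 128))   # 4
kill -l "$sig"       # prints the signal name for number 4 (SIGILL)
```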

I'm using relatively new hardware, so AVX and AVX2 support shouldn't be a problem (Ryzen 7 3700X & Ryzen 7 5700U). If I build the images locally, they run as expected without the instruction-set error. I also tried playing around a bit with the QEMU settings in the Docker build process, as mentioned in abetlen/llama-cpp-python#70, but had no success.
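One way to double-check the host side of this: list the CPU feature flags the kernel reports and confirm the SIMD extensions are actually there. This is a minimal Linux-only sketch; the flag list (avx, avx2, f16c, fma) is just an illustrative subset of what llama.cpp builds may rely on, not an authoritative requirement list.

```shell
# Read the first CPU's feature-flag line from /proc/cpuinfo (Linux, x86).
flags=$(grep -m1 '^flags' /proc/cpuinfo)

# Report presence of a few SIMD extensions commonly compiled into llama.cpp.
for f in avx avx2 f16c fma; do
    case " $flags " in
        *" $f "*) echo "$f: present" ;;
        *)        echo "$f: MISSING" ;;
    esac
done
```

If these all report "present" on the host while the published image still dies with SIGILL, the mismatch is in how the image's binary was compiled (or in the QEMU emulation used during its build), not in the host CPU.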

LLukas22 · Apr 13 '23 08:04