Ed Wios
Same here:

```
./chatLLaMa: line 53: 99012 Segmentation fault: 11 ./main $GEN_OPTIONS --model "$MODEL" --threads "$N_THREAD" --n_predict "$N_PREDICTS" --color --interactive --reverse-prompt "${USER_NAME}:" --prompt "
```

[main-2023-03-24-155839.ips.zip](https://github.com/ggerganov/llama.cpp/files/11064225/main-2023-03-24-155839.ips.zip)
Yippee! Commit 2a2e63ce0503d9bf3e55283e40a052c78c1cc3a8 fixed the issue beautifully. Thank you!!
No issue compiling here on a similar machine:

```
[soro:/Users … l/pytorch/llama.cpp] [pytorch-m1] master* 13d0h13m58s
± make
I llama.cpp build info:
I UNAME_S: Darwin
I UNAME_P: arm
I UNAME_M: arm64
...
```
Maybe not the same as #317. After a long conversation, I also got a segmentation fault:

```
ggml_new_tensor_impl: not enough space in the context's memory pool (needed 537259744, available ...
```
Here is the crash log: [main-2023-03-22-161321.ips.zip](https://github.com/ggerganov/llama.cpp/files/11041861/main-2023-03-22-161321.ips.zip)
There are already related discussions and attempts here: https://github.com/ggerganov/llama.cpp/issues/172, and an implementation (using the original LLaMA checkpoints) here: https://github.com/tloen/alpaca-lora#inference-generatepy. If LoRA could be made to work with q4, it'd be...
Tried the above with the nsfw.299x299.h5 model. Except for the first image, which it correctly identified as a drawing, the rest were all mistakenly identified as porn with a very high...