JackJollimore
Hi, I want to compile llama.cpp with BLAS in Termux. # Expected Behavior Currently, I use `make LLAMA_OPENBLAS=1`, and it fails to build. I expect it to locate one of...
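For context, a Termux build with OpenBLAS might look like the sketch below. The `make LLAMA_OPENBLAS=1` flag comes from the report; the Termux package names (`clang`, `make`, `libopenblas`) and the clone step are assumptions, not something confirmed in the issue.

```shell
# Sketch: building llama.cpp with OpenBLAS inside Termux.
# Package names below are an assumption about Termux's repositories.
pkg install -y clang make libopenblas

# Fetch the sources and build with the OpenBLAS backend enabled,
# as described in the report above.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make LLAMA_OPENBLAS=1
```

If the build still cannot locate BLAS, the usual next step is checking where the headers and library landed (e.g. under Termux's `$PREFIX`) and pointing the Makefile's include/library paths there.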
Hi, my device is a Samsung S10+. The application downloaded the parameters, and I see a message showing it is ready for chat. I type any message and send it. I see my message...
Hi, increasing the value of `-ngl` lowers performance. I built 2d7bf11 with CLBlast. Here's ./main: ``` LD_LIBRARY_PATH=/vendor/lib64 ./main -m ~/llama.cpp/models/Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_1.bin --color -c 2048 --keep -1 -t...
- [x] I am running the latest code. 794db3e - [x] I carefully followed the [README.md](https://github.com/ggerganov/llama.cpp/blob/master/README.md). - [x] I [searched using keywords relevant to my issue](https://docs.github.com/en/issues/tracking-your-work-with-issues/filtering-and-searching-issues-and-pull-requests) to make sure that...