txtsd
@xiota Just to be sure, you're asking me to:
1. Build `llama.cpp*` in a generic manner by using the build options from `libggml-git`
2. Remove the built libggml and use...
@xiota I made the `llama.cpp*` packages use baseline x86_64, using the custom `libggml-git` package as a reference. Please see if it's sufficient, and let me know. I'm not yet sure...
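For reference, "baseline x86_64" here means roughly the configure step below. This is only a sketch, not the exact PKGBUILD: the `GGML_NATIVE`/`GGML_AVX*` switches are the upstream ggml CMake options, and the precise set used in the packages may differ.

```sh
# Sketch of a generic/baseline x86_64 configure step (assumed flags, not the exact PKGBUILD).
# GGML_NATIVE=OFF avoids -march=native; leaving the AVX/FMA/F16C switches off
# keeps the binaries runnable on any x86_64 CPU.
cmake -B build \
    -DCMAKE_BUILD_TYPE=Release \
    -DGGML_NATIVE=OFF \
    -DGGML_AVX=OFF \
    -DGGML_AVX2=OFF \
    -DGGML_AVX512=OFF \
    -DGGML_FMA=OFF \
    -DGGML_F16C=OFF
cmake --build build
```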
I will investigate further, thanks!
@xiota Thanks for your offer of help. I made all the packages build statically. You were right, removing the `.a` files was the way to go. I was under the...
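The static-build change amounts to something like the following. Again, just a sketch under my assumptions: `BUILD_SHARED_LIBS=OFF` is the standard CMake switch, and the `.a` cleanup would live in the PKGBUILD's `package()` step; the actual packages may do this slightly differently.

```sh
# Sketch of the static-build idea (assumed flags, not the exact PKGBUILD).
# BUILD_SHARED_LIBS=OFF links ggml/llama statically into the binaries.
cmake -B build \
    -DCMAKE_BUILD_TYPE=Release \
    -DBUILD_SHARED_LIBS=OFF
cmake --build build

# In package(): drop the leftover static archives so nothing is shipped
# that would conflict with libggml-git.
find "$pkgdir" -name '*.a' -delete
```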
We might have a big problem with statically built libggml in llama.cpp (at least for CUDA). See: https://aur.archlinux.org/packages/llama.cpp-cuda#comment-998638