LocalAI
Run LocalAI on GPU
According to https://github.com/ggerganov/llama.cpp#blas-build, it seems llama.cpp can run models on the GPU via a BLAS build. Will LocalAI support that?