
LocalAI run on GPU

Open · shengkaixuan opened this issue on Apr 29, 2023 · 0 comments

https://github.com/ggerganov/llama.cpp#blas-build — it looks like llama.cpp can run models on the GPU. Will LocalAI support that?
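
For context, the linked section describes building llama.cpp with a BLAS back end. A minimal sketch of those build commands, assuming the Makefile flags documented there at the time (`LLAMA_OPENBLAS` / `LLAMA_CUBLAS`; these may have changed in later versions):

```sh
# Sketch based on the linked llama.cpp README section; the flags below are
# assumptions from that era of the project and may differ in current builds.

# CPU build with OpenBLAS-accelerated prompt processing
make LLAMA_OPENBLAS=1

# NVIDIA GPU build: route large matrix multiplications through cuBLAS
make LLAMA_CUBLAS=1
```

Presumably LocalAI would need to compile its bundled llama.cpp backend with one of these flags for the GPU acceleration to carry over.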
