
Does llama.cpp support multi-node, multi-GPU deployment?

Open · Tian14267 opened this issue 1 week ago · 1 comment

I have two machines, each with 8 × A800 GPUs (2 × 8 × A800), and I want to deploy a GGUF model across both of them. Does llama.cpp support multi-node, multi-GPU deployment? If so, how can I do this?
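For reference, llama.cpp's RPC backend (the `rpc-server` tool under `examples/rpc`) is the usual way to split a model across machines: worker nodes expose their GPUs over TCP, and the driver node lists them via the `--rpc` flag. Below is a hedged sketch, not a verified recipe; the hostname `machineB`, the ports, and the `build/bin` paths are placeholders, and it assumes binaries built with both CUDA and RPC support (e.g. `-DGGML_CUDA=ON -DGGML_RPC=ON`). Check the flags against your build's `--help` output, since they have changed between versions.

```shell
# On machine B (worker): start one rpc-server per GPU, pinning each
# process to a single device via CUDA_VISIBLE_DEVICES. Ports are arbitrary.
CUDA_VISIBLE_DEVICES=0 ./build/bin/rpc-server --host 0.0.0.0 --port 50052 &
CUDA_VISIBLE_DEVICES=1 ./build/bin/rpc-server --host 0.0.0.0 --port 50053 &
# ...repeat for the remaining GPUs on that node...

# On machine A (driver): load the GGUF model, offload layers to local GPUs,
# and extend the device list with the remote RPC workers.
./build/bin/llama-cli -m model.gguf -ngl 99 \
  --rpc machineB:50052,machineB:50053 \
  -p "Hello"
```

Note that the RPC transport is unencrypted and unauthenticated, so it should only be used on a trusted private network, and cross-node tensor traffic makes interconnect bandwidth the likely bottleneck.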

Tian14267 · Feb 14 '25 09:02