llama.cpp
Update ggml-backend.cpp
Change the maximum number of GPU+CPU backends from 16 to 64
Make sure to read the contributing guidelines before submitting a PR