llama.cpp
Upgrade init_tensor API to return a ggml_status
To prepare for an 'abort-free' ggml, as agreed with Diego in the ggml repo, upgrade the backend `init_tensor` APIs to return a `ggml_status`, so that failures during tensor initialization can be reported to the caller instead of aborting the process.