llama.cpp
llama : expose llama_model_n_head_kv in the API
It's useful to have this available from the library layer, since it's a key parameter of the model (e.g. for figuring out how much memory the KV cache will need).