llama.cpp
Misc. bug: server metrics sometimes return "-nan" values
When the metrics endpoint is enabled, and before any completion request has been made, the returned metrics include a -nan value:
llamacpp:n_busy_slots_per_decode -nan
Looking at the source code, this appears to be caused by an uninitialized value. Simply initializing this variable to zero should fix the problem and make Prometheus happy. https://github.com/ggerganov/llama.cpp/blob/3d68f034dad53f0f27ad626b2732ef48fbcea4ee/examples/server/server.cpp#L1107
llama.cpp version (from Docker)
root@llama-cpp-69d8cb4c8d-4tlc6:/app# ./llama-server --version
load_backend: loaded CPU backend from ./libggml-cpu-haswell.so
version: 4687 (b9ab0a4d)
built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu