
Some logging output not captured with `llama_log_set`

Open martindevans opened this issue 4 months ago • 4 comments

I'm using llama.cpp as a library from C# (LLamaSharp).

I set a log callback with `llama_log_set` that captures most of the output, so that I can suppress it or redirect it to a logging framework.

However, there are still some messages written out directly. For example:

ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA RTX A4500, compute capability 8.6, VMM: yes

I would expect all of the output to be captured by adding that one callback, so this seems like a bug.

I'm guessing this happens because those messages come from ggml.c rather than llama.cpp. If so, could the `llama_log_set` function be modified to hook the same callback into ggml as well?

martindevans avatar Feb 29 '24 13:02 martindevans