Misc. bug: llama-qwen2vl-cli: ignores --log* options
Name and Version
$ ./bin/llama-qwen2vl-cli --version
version: 4391 (9ba399df)
built with cc (Gentoo Hardened 14.2.1_p20241221 p6) 14.2.1 20241221 for x86_64-pc-linux-gnu
Operating systems
No response
Which llama.cpp modules do you know to be affected?
No response
Problem description & steps to reproduce
Run with --log-file /dev/null or --log-verbosity -100. Note that lines like

clip_model_load: model name: Qwen2-VL-7B-Instruct

are still produced on stdout. There is seemingly no way to isolate the model's output from llama.cpp's miscellaneous log messages. A reproduction sketch follows below.
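A minimal reproduction sketch, assuming the standard llama-qwen2vl-cli options; the model, mmproj, and image paths and the prompt are placeholders, not taken from this report:

# Placeholder paths; expectation: --log-file redirects all log output away from stdout
./bin/llama-qwen2vl-cli \
    -m Qwen2-VL-7B-Instruct-Q4_K_M.gguf \
    --mmproj mmproj-Qwen2-VL-7B-Instruct-f16.gguf \
    --image demo.jpg \
    -p "Describe this image." \
    --log-file /dev/null
# Observed: clip_model_load: ... lines still appear on stdout, mixed with the model's answer

The same behavior is seen when substituting --log-verbosity -100 for the --log-file option.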
First Bad Commit
No response
Relevant log output
No response