llama_index
Is there a way to disable logs (print statements)?
Not for now (setting verbose=False is the safest bet, but I know some indices still have print statements). I'll investigate how to make logging better.
In the meantime you can do something hacky like this: https://stackoverflow.com/questions/8391411/how-to-block-calls-to-print
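The hack from that link boils down to temporarily redirecting sys.stdout while the noisy call runs. A minimal self-contained sketch (noisy_call here is a stand-in for an index method that prints, not a real llama_index API):

```python
import contextlib
import io

def noisy_call():
    # Stand-in for an index method that prints directly to stdout
    print("building index...")
    return 42

captured = io.StringIO()
# Swallow anything written to stdout during the call
with contextlib.redirect_stdout(captured):
    result = noisy_call()
```

This only helps with plain print statements; it does nothing for output emitted through the logging module, which goes to stderr by default.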
Would you be interested in a PR for this? Do you have a preferred approach for something like this? I think a good way is to use the built-in logging module.
@triptu would love your contribution if you have time! Yeah, I agree; so far I've taken the easiest route of printing, but having an explicit logger would be useful (it might also be good to think about what to do with the verbose option scattered everywhere).
Going to tackle this. @triptu, please let me know if you have already started.
Approach:
- Think I am going to add a root logger (logging library) that gets pulled at class instantiation in the various base classes
- Thinking of adding root logging config at the module level and removing all the verbose=True arguments everywhere
- Will be able to set GPT_INDEX_LOG_LEVEL=foo as an environment variable or specify it in code.
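The approach above could be sketched like this. Note that GPT_INDEX_LOG_LEVEL is the environment variable proposed in this thread, not an existing library feature, and the logger name here is illustrative:

```python
import logging
import os

# Module-level logger; submodules would use logging.getLogger(__name__)
# so they all hang off the same hierarchy.
logger = logging.getLogger("gpt_index")

# Pick up the level from the proposed environment variable,
# falling back to WARNING if it is unset or unrecognized.
level_name = os.environ.get("GPT_INDEX_LOG_LEVEL", "WARNING")
logger.setLevel(getattr(logging, level_name.upper(), logging.WARNING))
```

Users could then override the level either via the environment variable or in code with logger.setLevel(...), replacing the scattered verbose=True flags.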
Notes:
As of 0.4.0, this issue should be resolved.
@jerryjliu I do not think this is resolved yet. I still get the following logs:
INFO:root:> [query] Total LLM token usage: 101 tokens
INFO:root:> [query] Total embedding token usage: 1 tokens
When I try to change the logging configuration, I get even more logs in addition to these (repeated).
I even tried things like the following, with no effect:
import contextlib
import os

with open(os.devnull, "w") as f, contextlib.redirect_stdout(f):
    index.query("<QUERY>")
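redirect_stdout cannot catch these messages: the logging module's default handler writes to sys.stderr, not sys.stdout, so redirecting stdout leaves them untouched. Raising the logger's level (on the root logger, for versions before 0.4.29 that log through it) does suppress them. A sketch:

```python
import logging

root = logging.getLogger()

# basicConfig installs a handler that writes to sys.stderr, which is
# why contextlib.redirect_stdout has no effect on these log lines.
logging.basicConfig(level=logging.INFO)

# Raising the level filters the INFO messages at the source instead
root.setLevel(logging.WARNING)
```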
As of 0.4.29, root logger calls have been replaced with module-level logger calls, so your logs should now look like:
INFO:llama_index.token_counter.token_counter:> [query] Total LLM token usage: 101 tokens
INFO:llama_index.token_counter.token_counter:> [query] Total embedding token usage: 1 tokens
To disable these, you can add something like:
import logging

logger = logging.getLogger('llama_index')
logger.setLevel(logging.WARNING)
That will result in llama_index emitting only warnings and above.
If the messages come from a specific submodule, you can set the level on just that logger instead:
logger = logging.getLogger('llama_index.token_counter')
logger.setLevel(logging.WARNING)
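This works because Python loggers form a dot-separated hierarchy: a level set on 'llama_index' is inherited by children such as 'llama_index.token_counter' until a child sets a level of its own. A quick demonstration:

```python
import logging

parent = logging.getLogger("llama_index")
child = logging.getLogger("llama_index.token_counter")

parent.setLevel(logging.WARNING)

# The child has no level of its own yet, so it inherits WARNING
inherited = child.getEffectiveLevel()

# Setting the child's own level overrides the inherited one
child.setLevel(logging.INFO)
own = child.getEffectiveLevel()
```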