llm stream_response is always printed regardless of the debug flag
Feature description
log_llm_stream is used by all llm_provider implementations, and they all call the log_llm_stream function below. Because the stream is always printed, streaming a response from an async function often results in messed-up logging.
from functools import partial

# define_log_level is defined earlier in metagpt/logs.py
logger = define_log_level()

def log_llm_stream(msg):
    # delegate to whichever stream log function is currently installed
    _llm_stream_log(msg)

def set_llm_stream_logfunc(func):
    # let callers swap in their own stream handler
    global _llm_stream_log
    _llm_stream_log = func

# default handler: print chunks straight to stdout with no newline
_llm_stream_log = partial(print, end="")
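For illustration, a toy provider driving this API might look like the sketch below. The fake_stream coroutine and its chunks are made up; only log_llm_stream and set_llm_stream_logfunc come from metagpt.logs.

import asyncio
from metagpt.logs import log_llm_stream, set_llm_stream_logfunc

async def fake_stream():
    # stand-in for a real provider loop that iterates over API chunks
    for chunk in ["Hello", ", ", "world", "\n"]:
        log_llm_stream(chunk)
        await asyncio.sleep(0)

buffer = []
set_llm_stream_logfunc(buffer.append)  # capture chunks instead of printing
asyncio.run(fake_stream())
print("".join(buffer))

With the default handler, every chunk goes to stdout unconditionally, which is the behavior this issue is about.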
Your Feature
Currently loguru doesn't provide a stable/official way to query the current log level, so whenever we use define_log_level, we need a global variable to keep track of the log level and mute log_llm_stream accordingly. If that sounds feasible, I could create a PR.
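A minimal sketch of the idea, assuming define_log_level keeps its current print_level parameter; the _print_level global and the DEBUG check are my additions, not existing code:

_print_level = "INFO"  # module-level tracker for the level passed to define_log_level

def define_log_level(print_level="INFO", logfile_level="DEBUG"):
    global _print_level
    _print_level = print_level  # remember the level, since loguru can't report it back
    # ... existing handler setup stays unchanged ...

def log_llm_stream(msg):
    # treat llm stream output as debug-level: only print when running at DEBUG
    if _print_level == "DEBUG":
        _llm_stream_log(msg)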
@azurewtl Thank you for bringing up this issue. I'd like to clarify your suggestion: Are you proposing to add a new log_level configuration and treat llm stream output as debug level output?
@shenchucheng yes!
@azurewtl It sounds pretty good, we'd love to see your PR.