
llm stream_response is always printed regardless of debug flag

Open · azurewtl opened this issue 10 months ago · 3 comments

Feature description: log_llm_stream is used by all LLM providers, and they all call the log_llm_stream function below. Streaming responses from async functions often results in messed-up logging.

from functools import partial

# define_log_level is defined earlier in the same module
logger = define_log_level()

def log_llm_stream(msg):
    # Forward each streamed LLM chunk to the current stream-log function
    _llm_stream_log(msg)

def set_llm_stream_logfunc(func):
    # Allow callers to swap in a custom stream-logging function
    global _llm_stream_log
    _llm_stream_log = func

# Default: write chunks straight to stdout with no newline,
# regardless of the configured log level
_llm_stream_log = partial(print, end="")
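To see why this gets messy, here is a minimal, self-contained sketch (illustrative only, not MetaGPT's actual provider code) of two async "streams" that each call a log_llm_stream-style default logger; because each chunk is printed the moment it arrives, concurrent coroutines interleave their output on stdout:

from functools import partial
import asyncio

log_llm_stream = partial(print, end="")  # same default as above

async def fake_stream(name: str):
    """Stand-in for a provider's chunk-by-chunk completion stream."""
    for token in (f"<{name}:hello> ", f"<{name}:world> "):
        await asyncio.sleep(0)   # yield control, as a real network read would
        log_llm_stream(token)    # echoed immediately, regardless of log level

async def main():
    # Two concurrent streams interleave their tokens on stdout
    await asyncio.gather(fake_stream("A"), fake_stream("B"))
    print()

asyncio.run(main())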

Your Feature: currently loguru doesn't provide a stable/official way to query the current log level, so whenever we call define_log_level we need a global variable to keep track of the log level, and mute log_llm_stream accordingly. If that sounds feasible, I could create a PR.
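A minimal sketch of that idea, assuming a module-level _log_level global set inside define_log_level (the names and parameters here are illustrative, not existing MetaGPT API):

from functools import partial

_llm_stream_log = partial(print, end="")
_log_level = "INFO"  # module-level tracker, since loguru has no stable getter

def define_log_level(print_level: str = "INFO", logfile_level: str = "DEBUG"):
    """Configure loguru and remember the console level in a global."""
    global _log_level
    _log_level = print_level
    # ... existing loguru sink setup elided ...

def log_llm_stream(msg):
    # Treat raw LLM stream chunks as debug-level output: only echo
    # them when the console log level is DEBUG
    if _log_level == "DEBUG":
        _llm_stream_log(msg)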

azurewtl · Mar 26 '24 05:03

@azurewtl Thank you for bringing up this issue. I'd like to clarify your suggestion: Are you proposing to add a new log_level configuration and treat llm stream output as debug level output?

shenchucheng · Mar 26 '24 08:03

@shenchucheng yes!

azurewtl · Mar 26 '24 10:03

@azurewtl That sounds pretty good; we'd love to see your PR.

shenchucheng · Mar 27 '24 06:03