
Write queued messages to sink after explicit logger.flush(requeue: Optional[bool] = False)

Open reneleonhardt opened this issue 5 months ago • 1 comment

My use case: I would like to display multiple tqdm progress bars and emit log messages at (roughly) the same time in an asyncio app. Currently, all writes to stdout/stderr get interleaved while tqdm refreshes the progress bars several times per second, which garbles the output.
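For context, here is a minimal sketch (not part of the original report) of the usual way to keep loguru and tqdm from stepping on each other: route log output through tqdm.write, which prints the line and then repaints the bars below it.

import time

from loguru import logger
from tqdm import tqdm

logger.remove()
# tqdm.write prints above the active bars and redraws them afterwards
logger.add(lambda msg: tqdm.write(msg, end=""), colorize=True)

for i in tqdm(range(3), desc="work"):
    logger.info("step {} done", i)
    time.sleep(0.1)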

I just saw the logger.add(enqueue=True) option. Could _queued_writer() be enhanced to hold queued messages, instead of writing/flushing them to the sink, until a command like logger.flush() is explicitly called? In my case that call would happen after all progress bars have been closed (and maybe even removed with leave=False), once tqdm stops updating them. Every message would then be written normally, long after it was recorded, with the original timestamp from when the event was queued minutes earlier. flush(requeue=True) would re-enable queuing; otherwise the logger would automatically configure(enqueue=False) on itself, so future log messages are no longer queued.
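As a rough approximation of that behavior with today's API (everything below is illustrative; buffered_sink and flush_buffered are not part of loguru), a callable sink can buffer the already-formatted lines, which keep their original timestamps, and a helper can write them out once the bars are gone:

from loguru import logger

_buffer = []  # holds fully formatted log lines, timestamps already rendered

def buffered_sink(message):
    # Callable sinks receive the formatted line, so the record's original
    # timestamp is preserved even if we only print it minutes later.
    _buffer.append(message)

def flush_buffered():
    # Hypothetical stand-in for the proposed logger.flush()
    for line in _buffer:
        print(line, end="")
    _buffer.clear()

logger.remove()
logger.add(buffered_sink)

logger.info("queued while the progress bars are still drawing")
# ... progress bars run and close here ...
flush_buffered()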

After some debugging I found a workaround that retains the default message format and the color/escape codes. Sorry for the noise, and thank you for this amazing library!

import io

from loguru import logger

logger.configure(handlers=[dict(sink=(stringio := io.StringIO()), colorize=True)])  # route all log output into an in-memory buffer
logger.info("First")
print("Second")  # goes straight to stdout, so it appears before the buffered log line
print(stringio.getvalue())
# Second
# 2024-09-21 00:00:00.000 | INFO     | __main__:<module>:6 - First
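One caveat with this approach: io.StringIO.getvalue() returns everything written so far, so dumping the buffer more than once repeats old lines unless it is cleared in between, for example:

print(stringio.getvalue(), end="")  # dump what has accumulated so far
stringio.seek(0)
stringio.truncate(0)  # reset the buffer so the next dump starts empty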

reneleonhardt · Sep 21 '24 15:09