Logging error in Loguru Handler
I'm experiencing the following error; after several attempts and searching through previous issues, I haven't found a solution yet:
```
--- Logging error in Loguru Handler #2 ---
Record was: {'elapsed': datetime.timedelta(seconds=11, microseconds=363235), 'exception': None, 'extra': {}, 'file': (name='logger.py', path='/Users/foo/code/foo/bar/foo/app/logger.py'), 'function': 'emit', 'level': (name='INFO', no=20, icon='ℹ️'), 'line': 40, 'message': 'Cleaning created processes.', 'module': 'logger', 'name': 'logger', 'process': (id=21421, name='MainProcess'), 'thread': (id=8544645696, name='MainThread'), 'time': datetime(2024, 10, 14, 12, 54, 53, 251057, tzinfo=datetime.timezone(datetime.timedelta(seconds=7200), 'CEST'))}
Traceback (most recent call last):
  File "/Users/foo/Library/Caches/pypoetry/virtualenvs/foo-py3.12/lib/python3.12/site-packages/loguru/_handler.py", line 206, in emit
    self._sink.write(str_record)
  File "/Users/foo/Library/Caches/pypoetry/virtualenvs/foo-py3.12/lib/python3.12/site-packages/loguru/_simple_sinks.py", line 16, in write
    self._stream.write(message)
ValueError: I/O operation on closed file.
--- End of logging error ---
```
My logger.py file:
```python
import logging
import sys

from loguru import logger
from models import (
    FOOExceptionEvent,
    Event,
    PromptEvent,
    Reason,
    Service,
)
from opencensus.ext.azure.log_exporter import AzureEventHandler
from settings import settings


class InterceptHandler(logging.Handler):
    def emit(self, record: logging.LogRecord) -> None:
        try:
            level = logger.level(record.levelname).name
        except ValueError:
            level = record.levelno

        frame, depth = logging.currentframe(), 0
        while frame and frame.f_code.co_filename == logging.__file__:
            frame = frame.f_back
            depth += 1

        logger.opt(depth=depth, exception=record.exc_info).log(level, record.getMessage())


def setup_logging(level: str = settings().log_level.upper()) -> None:
    logging.getLogger("asyncio").setLevel(logging.WARNING)
    logging.getLogger("hpack").setLevel(logging.WARNING)
    logging.getLogger("urllib3").setLevel(logging.WARNING)
    logging.getLogger("uvicorn.access").setLevel(logging.WARNING)
    logging.getLogger("httpcore.connection").setLevel(logging.WARNING)
    logging.getLogger("httpcore.http11").setLevel(logging.WARNING)
    logging.getLogger("httpx").setLevel(logging.WARNING)
    logging.getLogger("azure.core.pipeline.policies.http_logging_policy").setLevel(logging.WARNING)
    logging.getLogger("azure.identity").setLevel(logging.WARNING)
    logging.captureWarnings(True)
    logging.basicConfig(handlers=[InterceptHandler()], level=logging.NOTSET, force=True)
    logging.getLogger("azure.monitor.opentelemetry.exporter.export").setLevel(logging.WARNING)
    logging.getLogger("urllib3.connectionpool").setLevel(logging.WARNING)
    logging.getLogger("uvicorn.access").handlers = [InterceptHandler()]

    for _log in [
        "uvicorn",
        "fastapi",
    ]:
        _logger = logging.getLogger(_log)
        _logger.handlers = [InterceptHandler()]

    if settings().azure_application_insights_active:
        azure_event_handler = AzureEventHandler(connection_string=settings().azure_opentelemetry_connection_string)
        for event_type in (PromptEvent, FOOExceptionEvent):
            event_logger = logging.getLogger(event_type.get_name())
            event_logger.addHandler(azure_event_handler)
            event_logger.setLevel(logging.INFO)
        log_event(PromptEvent("initial", vehicle_id="vehicle"))
        log_event(FOOExceptionEvent(Service.AZURE_OPENAI, Reason.UNAVAILABLE))

    logger.remove()
    logger.add(sink=sys.stdout, level=level, format=settings().log_format)


def log_event(event: Event) -> None:
    if settings().azure_application_insights_active:
        event_logger = logging.getLogger(event.get_name())
        event_logger.info(event.__class__.__name__, extra=event.get_custom_dimensions())
```
I'd be grateful for any hints that point me in the right direction, thanks!
Hi @goseind.
As you guessed, this error occurs because the logger is trying to write to a stream object that has been closed, which is invalid (see IOBase.close()).
When removing a file handler, Loguru will automatically close the corresponding stream object that it opened. However, for non-file streams such as `sys.stderr` or `sys.stdout`, Loguru will not close them: doing so would be inconvenient and incorrect, because these streams are global and must remain usable after the sink has been removed.
The error you're facing seems to come from the handler that uses `sys.stdout` (according to the configuration you shared). Since Loguru does not close such streams by itself, this means something else closed the stream while it was still in use by the logger.
At this point, it's unfortunately difficult for me to identify what in your environment or your libraries is causing this effect.
Some tools take the liberty of replacing `sys.stdout` with their own stream-like object so that they can capture what is printed on the standard output. This is the case for some libraries, IDEs, and cloud platforms.
The problem is that the logger will use this wrapped stream as well. But if the third-party tool happens to clean up and close the stream, then the logger is left with an unusable sink.
Here is a simplified example to illustrate the issue:
```python
from contextlib import contextmanager
import sys
import io

from loguru import logger


@contextmanager
def redirect_stdout(new_target):
    old_target, sys.stdout = sys.stdout, new_target
    try:
        yield new_target
    finally:
        sys.stdout = old_target
        new_target.close()


if __name__ == "__main__":
    logger.remove()

    f = io.StringIO()
    with redirect_stdout(f):
        logger.add(sys.stdout)
        logger.info("Hello")
        captured_output = f.getvalue()  # read before the wrapper is closed

    print(f"Captured output: {captured_output}")

    # The wrapper has been closed, but the handler still references it:
    # ValueError: I/O operation on closed file.
    logger.info("World")
```
Here's what you can possibly do in your situation:
- identify the tool that is manipulating `sys.stdout`;
- make sure the `logger` is always fully re-initialized whenever your environment is likely to clean up the wrapped `sys.stdout`;
- configure the `logger` with `logger.add(lambda m: sys.stdout.write(m))` instead of `logger.add(sys.stdout)`, so that the stream is dynamically retrieved at each write and therefore not sensitive to changes.