Handling of loggers with propagate=False
Hi,
There is a two-year-old issue on the old repo, and inspired by #3003 I thought it would be easier if I at least added a reference here:
https://github.com/eisensheng/pytest-catchlog/issues/44
I suggest removing the auto-added label os:mac, as this is an issue on Ubuntu 16.04 as well.
When using caplog.records or caplog.record_tuples, nothing is captured for a logger that does not propagate up to the root logger. When testing the log output, is there any workaround for this, or does a solution require mocking of logging handlers?
Not sure, we only capture what we get from the root logger... @thisch might give some input here.
So the simplest workaround in a pytest test is to enable propagation during the test, but only for tests that must check the presence or content of a log message (other tests might confirm that propagation is disabled by default). If I had time, a PR on the docs might help, but here is what I have time for now, e.g.
import logging

def test_app_logger_content(caplog):
    logger = logging.getLogger('app')  # 'app' is already configured without propagation
    logger.propagate = True  # log capture only works for propagation to the root logger
    logger.info('foo')
    logger.error('err')
    assert caplog.record_tuples == [
        ('app', logging.INFO, 'foo'),
        ('app', logging.ERROR, 'err')
    ]

def test_app_logger_does_not_propagate(caplog):
    logger = logging.getLogger('app')
    assert not logger.propagate
    logger.info('foo')
    logger.error('err')
    assert not caplog.records
If you have only a few loggers which don't propagate, you can probably use a session-scoped autouse fixture to enable propagation for those loggers.
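As a rough sketch of that idea (the logger names used here are only placeholders, not anything from this thread):

import logging
import pytest

# Placeholder names for loggers that are configured with propagate=False.
NON_PROPAGATING_LOGGERS = ["app", "app.worker"]

@pytest.fixture(autouse=True, scope="session")
def propagate_app_loggers():
    """Enable propagation for selected loggers for the whole test session,
    so that caplog (which listens on the root logger) sees their records."""
    loggers = [logging.getLogger(name) for name in NON_PROPAGATING_LOGGERS]
    original = [logger.propagate for logger in loggers]
    for logger in loggers:
        logger.propagate = True
    yield
    for logger, value in zip(loggers, original):
        logger.propagate = value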
I think this is an issue. Enabling propagation for a logger can produce a false positive test result.
Say I would like to ensure that a message goes to exactly the logger I want it to go to. There is no way to test that at the moment.
It would be helpful to be able to tell caplog exactly which logger I'm interested in, so that it captures that logger and everything that goes to it.
@lig If we add support for specifying a logger in _pytest.logging.catching_logs, you can use the following code to tell caplog the logger you're interested in:
import logging

from _pytest.logging import catching_logs

def test_app_logger_content_nocatchlog(caplog):
    logger = logging.getLogger('app')
    logger.propagate = False
    logger.info('foo')
    logger.error('err')
    assert ('app', logging.INFO, 'foo') not in caplog.record_tuples
    assert ('app', logging.ERROR, 'err') not in caplog.record_tuples

def test_app_logger_content_catchlog(caplog):
    logger = logging.getLogger('app')
    logger.propagate = False
    caplog.set_level(logging.INFO)
    with catching_logs(caplog.handler, logger=logger):
        logger.info('foo')
        logger.error('err')
        assert caplog.record_tuples == [
            ('app', logging.INFO, 'foo'),
            ('app', logging.ERROR, 'err')
        ]
@thisch looks really useful. One thing: I'd assume this would also work:
def test_app_logger_content_catchlog(caplog):
    logger = logging.getLogger('app')
    logger.propagate = False
    caplog.set_level(logging.INFO)
    with catching_logs(caplog.handler, logger=logger):
        logger.info('foo')
        logger.error('err')
    assert caplog.record_tuples == [
        ('app', logging.INFO, 'foo'),
        ('app', logging.ERROR, 'err'),
    ]
notice that caplog keeps its records after the catching_logs context manager has exited
Yeah, it would be ideal if there were a proper replacement in pytest for unittest.TestCase.assertLogs, i.e. with pytest.logs(...
Can you not just copy what that does, i.e. temporarily override the propagate flag and the handlers?
class _AssertLogsContext(_BaseTestCaseContext):
    """A context manager used to implement TestCase.assertLogs()."""

    LOGGING_FORMAT = "%(levelname)s:%(name)s:%(message)s"

    def __init__(self, test_case, logger_name, level):
        _BaseTestCaseContext.__init__(self, test_case)
        self.logger_name = logger_name
        if level:
            self.level = logging._nameToLevel.get(level, level)
        else:
            self.level = logging.INFO
        self.msg = None

    def __enter__(self):
        if isinstance(self.logger_name, logging.Logger):
            logger = self.logger = self.logger_name
        else:
            logger = self.logger = logging.getLogger(self.logger_name)
        formatter = logging.Formatter(self.LOGGING_FORMAT)
        handler = _CapturingHandler()
        handler.setFormatter(formatter)
        self.watcher = handler.watcher
        self.old_handlers = logger.handlers[:]
        self.old_level = logger.level
        self.old_propagate = logger.propagate
        logger.handlers = [handler]
        logger.setLevel(self.level)
        logger.propagate = False
        return handler.watcher

    def __exit__(self, exc_type, exc_value, tb):
        self.logger.handlers = self.old_handlers
        self.logger.propagate = self.old_propagate
        self.logger.setLevel(self.old_level)
        if exc_type is not None:
            # let unexpected exceptions pass through
            return False
        if len(self.watcher.records) == 0:
            self._raiseFailure(
                "no logs of level {} or higher triggered on {}"
                .format(logging.getLevelName(self.level), self.logger.name))
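For illustration, here is a minimal standalone sketch of that approach usable outside of unittest, relying only on the standard library; the names assert_logs and _ListHandler are made up here, not an existing pytest API:

import logging
from contextlib import contextmanager

class _ListHandler(logging.Handler):
    """Collects emitted records in a plain list (illustrative helper)."""
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(record)

@contextmanager
def assert_logs(name, level=logging.INFO):
    # Temporarily replace the logger's handlers and disable propagation,
    # mirroring what unittest's assertLogs does, then restore everything.
    logger = logging.getLogger(name)
    handler = _ListHandler()
    old_handlers = logger.handlers[:]
    old_level = logger.level
    old_propagate = logger.propagate
    logger.handlers = [handler]
    logger.setLevel(level)
    logger.propagate = False
    try:
        yield handler
    finally:
        logger.handlers = old_handlers
        logger.setLevel(old_level)
        logger.propagate = old_propagate
    if not handler.records:
        raise AssertionError(
            "no logs of level %s or higher triggered on %r"
            % (logging.getLevelName(level), name))

A test could then use "with assert_logs('app') as handler:" and inspect handler.records afterwards.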
This is unfortunately still relevant, and the workaround mentioned above by @lig and @thisch, using the internal catching_logs() function, is no longer available: the "logger" parameter was removed in 3862b0b28d1dac1d9ea398fc6155ed5e6de5498c.
Are there any other ways to achieve the same effect? I'm a bit puzzled about how to fix our unit tests for a project where there is a logger which doesn't propagate to root.
@chr1st1ank I didn't expect that my cleanup commit removed support for catching logs of non-propagating loggers :-/. Solving this issue in a proper way doesn't seem to be straightforward, but I'll try to come up with a potential fix for it in the next weeks.
Great, yes, the commit message did not read as if you had expected this effect. Thanks for the quick response; let me know if I can be of any help.
I think for handling loggers with propagate=False (or also for loggers with propagate=True) we can use a new fixture (actually it doesn't have to be a fixture) that captures logs in a temporary LogCaptureHandler:
import logging
from contextlib import contextmanager
from typing import Generator

from _pytest.logging import LogCaptureHandler
from pytest import fixture

class LogLocalCaptureFixture:
    """Provides access and control of log capturing."""

    def __init__(self):
        self.handler = LogCaptureHandler()

    @contextmanager
    def __call__(
        self, level: int, logger: logging.Logger
    ) -> Generator[None, None, None]:
        """Context manager that sets the level for capturing of logs. After
        the end of the 'with' statement the level is restored to its original
        value.

        :param int level: The level.
        :param logging.Logger logger: The logger to update.
        """
        # or keep the logger level and only change the handler level?
        orig_level = logger.level
        logger.setLevel(level)
        logger.addHandler(self.handler)
        try:
            yield  # or yield self.handler?
        finally:
            logger.setLevel(orig_level)
            logger.removeHandler(self.handler)

@fixture
def local_caplog() -> Generator[LogLocalCaptureFixture, None, None]:
    yield LogLocalCaptureFixture()
Here is an example that shows how it can be used:

def test_app_logger_does_not_propagate2(local_caplog):
    logger = logging.getLogger('app')
    logger.propagate = False
    logger.error('err before caplog ctx manager')
    with local_caplog(level=logging.DEBUG, logger=logger):
        logger.info('foo')
        logger.debug('dbg')
        logger.error('err')
    assert ([
        r.getMessage() for r in local_caplog.handler.records
    ] == ['foo', 'dbg', 'err'])
Thank you! That works perfectly for me.
I just rearranged the code a bit and added a function which gives me record_tuples like the caplog fixture does. It is a plain context manager now, not a fixture anymore. Just in case anyone else runs into the same issue, I'm leaving my version here as well:
import logging
from contextlib import contextmanager
from typing import List, Tuple

from _pytest.logging import LogCaptureHandler

@contextmanager
def catch_logs(level: int, logger: logging.Logger) -> LogCaptureHandler:
    """Context manager that sets the level for capturing of logs.

    After the end of the 'with' statement the level is restored to its original value.

    :param level: The level.
    :param logger: The logger to update.
    """
    handler = LogCaptureHandler()
    orig_level = logger.level
    logger.setLevel(level)
    logger.addHandler(handler)
    try:
        yield handler
    finally:
        logger.setLevel(orig_level)
        logger.removeHandler(handler)

def records_to_tuples(records: List[logging.LogRecord]) -> List[Tuple[str, int, str]]:
    """A list of stripped-down log records intended for use in assertion comparison.

    :param records: A list of LogRecord objects.
    :returns: A list of tuples, where each tuple has the format (logger_name, log_level, message)
    """
    return [(r.name, r.levelno, r.getMessage()) for r in records]

def test_catch_non_propagated_logs():
    logger = logging.getLogger('app')
    logger.propagate = False
    with catch_logs(level=logging.INFO, logger=logger) as handler:
        logger.info('foo')
        logger.error('err')
    assert records_to_tuples(handler.records) == [
        ('app', logging.INFO, 'foo'),
        ('app', logging.ERROR, 'err')
    ]
Now I just discovered that the documentation of loguru also provides a similar workaround: https://loguru.readthedocs.io/en/stable/resources/migration.html#making-things-work-with-pytest-and-caplog
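From memory (check the linked page for the canonical version), the loguru workaround is a fixture roughly along these lines; it requires loguru and redirects its messages into caplog's handler for the duration of the test:

import pytest
from loguru import logger

@pytest.fixture
def caplog(caplog):
    # Attach pytest's caplog handler as a loguru sink, then detach it again.
    handler_id = logger.add(caplog.handler, format="{message}")
    yield caplog
    logger.remove(handler_id)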
This issue also prevents report logging. The "Captured log call" section was quite useful, but since we use propagate = False it remains empty. So far I can only work around it by either temporarily setting propagate = True, or by monkey patching _pytest.logging.catching_logs so that it solely uses getLogger("app") and registers a logging.NullHandler at getLogger().
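For the first of those workarounds, a per-test sketch could use pytest's built-in monkeypatch fixture (the logger name 'app' and the test name are just examples):

import logging

def test_something(monkeypatch):
    # Temporarily re-enable propagation so records reach the root logger
    # (and therefore caplog and the report handler); monkeypatch restores
    # the original value after the test.
    monkeypatch.setattr(logging.getLogger("app"), "propagate", True)
    logging.getLogger("app").warning("visible in the 'Captured log call' section again")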
How about adding an option

add_option_ini(
    "--log-name",
    dest="log_name",
    default=None,
    help="Which logger to attach to for caplog and report logging (defaults to root logger)",
)

to _pytest/logging.py?
You mean a global option for the whole test suite? I wouldn't find that a good pattern. It should be possible to test different loggers in different tests. But I'm not sure if I got you right.
Regarding "It should be possible to test different loggers in different tests": my comment was not about testing loggers (that is what one would do with caplog, using the caplog_handler under the hood). I'd like to diagnose failed tests using the report_handler; see "pytest captures log messages of level WARNING or above automatically and displays them in their own section for each failed test in the same manner as captured stdout and stderr" in How to manage logging. Both the caplog_handler and the report_handler are affected by this issue.
Maybe we need different solutions for different use cases (caplog, live log, report log)?
Or how about automatically finding and attaching to all non-propagating loggers (including the root logger) in catching_logs? Would this have unwanted side effects in your use case?
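For reference, a rough sketch of how such discovery could work, using the logging module's logger registry; whether pytest should attach to all of these by default is exactly the open question:

import logging

def non_propagating_loggers():
    """Yield all currently known loggers that have propagate=False.

    logging.Logger.manager.loggerDict maps names to Logger or PlaceHolder
    objects; placeholders carry no configuration and are skipped.
    """
    for obj in logging.Logger.manager.loggerDict.values():
        if isinstance(obj, logging.Logger) and not obj.propagate:
            yield obj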
Would the maintainers consider accepting one of the mentioned approaches here?
I'm working on a Django app, with a Huey worker, and pytest functional tests.
We don't propagate logs to root except while testing. The following setup works great for us:
IS_TESTING = 'pytest' in sys.modules

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose_with_color': {
            '()': 'colorlog.ColoredFormatter',
            'format': '{log_color}{levelname}{reset} {asctime} {cyan}{name}{reset}: {message}',
            'style': '{',
        },
        'simple': {
            'format': '{levelname} {name}: {message}',
            'style': '{',
        },
    },
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'verbose_with_color' if DEBUG else 'simple',
        },
        'null': {
            'class': 'logging.NullHandler',
        },
    },
    'loggers': {
        'root': {
            'handlers': ['console'],
            'level': 'INFO',
        },
        'django.server': {  # Suppress django HTTP logging because we do it in a middleware
            'handlers': ['console'],
            'level': 'WARNING',
            'propagate': IS_TESTING,
        },
        'huey': {
            'handlers': ['console'],
            'level': 'INFO',
            'propagate': IS_TESTING,
        },
        'myapp': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': IS_TESTING,
        },
        ...
    },
}
I'm not sure I'm proud of this trick but it works surprisingly well given that pytest will only capture logs from the root logger.