pytest capture logging error still happening
Good morning,
I am seeing the same issue as described in https://github.com/pytest-dev/pytest/issues/14 but with a much more modern combination of (Anaconda) Python and pytest:
Test session starts (platform: linux, Python 3.6.8, pytest 4.6.3, pytest-sugar 0.9.2)
rootdir: /ml/tests/ml/services, inifile: all-tests.ini
plugins: forked-1.0.2, xdist-1.29.0, sugar-0.9.2, cov-2.7.1, mock-1.10.4
In the relevant .ini file I have this:
[pytest]
testpaths = tests
addopts =
-n 4
--durations=20
--disable-warnings
where -n 4 runs 4 parallel test workers with pytest-xdist. Edit: I was able to isolate the behavior to runs using parallel workers with xdist, so it is likely an issue with an xdist worker prematurely closing a logger stream.
Basically, when I run one particular test file, I see a large number of repeated error messages in the pytest output:
--- Logging error ---
Traceback (most recent call last):
File "/usr/lib/python3.6/logging/__init__.py", line 996, in emit
stream.write(msg)
File "/usr/local/lib/python3.6/dist-packages/_pytest/capture.py", line 441, in write
self.buffer.write(obj)
ValueError: I/O operation on closed file
Call stack:
File "/usr/lib/python3.6/threading.py", line 884, in _bootstrap
self._bootstrap_inner()
File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/ml/ml/models/neighbors.py", line 289, in re_index
success = self._re_index()
File "/ml/ml/models/neighbors.py", line 349, in _re_index
logger.info('== Fit new model with {} ids and a dimensionality of {} =='.format(n, dim))
Message: '== Fit new model with 48 ids and a dimensionality of 2 =='
Arguments: ()
The issue appears to be related to pytest prematurely closing the stream associated with a given logger for the underlying code module in question. I can't really post all the code since it is an extended example from my work, but I can confirm there is nothing exotic or unusual happening with this logger. The module being tested just uses the normal convention to define
logger = logging.getLogger(__name__)
and there are no duplicate loggers or conflicts with this logger's name. The logger itself is not defined in any multiprocessing context or anything like that. Just a boring import of a top-level module from our code base.
But during the test execution, something weird is happening with pytest such that the eventual calls into that logger produce these errors.
If I turn off pytest capturing with --capture=no, then the messages go away, but unfortunately so does a bunch of other necessary output that I want pytest to capture and display.
How can I debug this further? I'm sorry that I cannot provide a minimal working example, but I can definitely confirm that there is nothing weird going on with these tests. It's a very straightforward use of logging and a very straightforward test file with simple imports and function calls.
- [x] a detailed description of the bug or suggestion
- [x] output of pip list from the virtual environment you are using
- [x] pytest and operating system versions
- [x] minimal example if possible
A reproducing case, which doesn't need xdist:
import atexit
import logging
LOGGER = logging.getLogger(__name__)
def test_1():
    pass
atexit.register(lambda: LOGGER.error("test in atexit"))
logging.basicConfig()
I'm having the same issue. Mine are debug logs during atexit.
Same happening here in Python 3.7 and latest pytest
@Zac-HD Thanks for migrating that example from the other issue. In my case, there was simply no way to produce a reproducible example, since my original bug involved so many moving parts across pytest, pytest-xdist, and a large test suite. In general it is quite hard to reduce pytest errors to small, reproducible cases, though I agree that is the ideal to strive for. Most of the time, the best someone can realistically do is paste in the error message, give general context around the use case where it appears, and hope the pytest devs, who know much more about the internals, can scope it down to the reproducible essence of the bug. Just wanted to say it is very appreciated!
Hi everyone,
What I believe is happening is:
- pytest changes sys.stdout and sys.stderr to a buffer while importing test modules.
- If there's user code setting up logging and/or creating a logging.StreamHandler at the import level, it will attach itself to pytest's buffer (see the sketch after this list).
- When pytest is about to finish the test session, it will restore sys.stdout and sys.stderr to the original values and close the "capture" buffer.
- Here the problem happens: if any message is emitted at this point, the StreamHandler will try to write to the now-closed buffer, hence the error.
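A minimal sketch of how the second step plays out (the module name is hypothetical, not from the report above):

# my_module.py -- imported by a test module, i.e. imported *during capture*
import logging
import sys

# StreamHandler resolves sys.stderr right here, while it is pytest's
# capture buffer, and keeps that reference. Any record emitted after
# pytest closes the buffer then raises "I/O operation on closed file".
handler = logging.StreamHandler(sys.stderr)
logging.getLogger().addHandler(handler)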
Unfortunately I don't see how pytest can work around this, because we can't know who kept a reference to the capture buffer and somehow tell the owner of the reference that we are now "dead".
@spearsem, are you calling basicConfig at import time in your application, or setting up logging yourself at import time?
If I'm getting this right, my suggestion is to avoid setting up your logging configuration at import time, moving it instead to a function which is called only when running your actual production code.
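For illustration, something like this (the names are hypothetical):

import logging

def setup_logging():
    """Configure handlers at application startup, not at import time."""
    logging.basicConfig(level=logging.INFO)

def main():
    setup_logging()  # only production entry points call this
    logging.getLogger(__name__).info("application started")

if __name__ == "__main__":
    main()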
@nicoddemus the application we are testing unfortunately requires setting up logging at import time. It is part of a Flask application that has an in-house framework for configuring logging so that it is standardized across all microservices (for compliance). I think this is actually a very commonly needed use case for automating logging in microservices. Some of our pytest code is using Flask testing clients to test these services, and the construction of the services will always produce this problem (unless we change the whole framework).
I can also say this was not always happening with pytest; it appeared in some old bug reports, then went away for a while, and came back with recent versions. How did pytest handle this differently in between (or am I mistaken and it was always present)? Particularly regarding item 3 from your comment: what prevents pytest from being able to wait definitively for the testing code to complete and be torn down entirely before switching stdout and stderr back?
You mention,
Unfortunately I don't see how pytest can work around this, because we can't know who kept a reference to the capture buffer and somehow tell the owner of the reference that we are now "dead".
but I don't understand. Shouldn't pytest be able to keep track of this exactly, or at least wait until all test operations are completed, so that any pytest unit of execution possibly containing an object that requested access to the buffers has fully completed before pytest makes the "undo" switch back?
Maybe related to https://github.com/pytest-dev/pytest/pull/4943 and/or https://github.com/pytest-dev/pytest/pull/4988.
Shouldn't pytest be able to keep track of this exactly, or at least wait until all test operations are completed, so that any pytest unit of execution possibly containing an object that requested access to the buffers has fully completed before pytest makes the "undo" switch back?
Yes, that would be good. But likely we would need to wrap around atexit then - are you using that also?
I guess what should be done here is to duplicate/redirect the handlers instead of closing them.
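As a user-side approximation of that idea (a conftest.py sketch, not pytest's internals; StreamHandler.setStream() requires Python 3.7+):

# conftest.py
import logging
import sys

def pytest_sessionfinish(session, exitstatus):
    # Re-point plain StreamHandlers at the real stderr so they stop
    # referencing pytest's capture buffer before it gets closed.
    loggers = [logging.getLogger()] + list(logging.Logger.manager.loggerDict.values())
    for logger in loggers:
        for handler in getattr(logger, "handlers", []):
            # exact type check on purpose: FileHandler subclasses
            # StreamHandler and must keep its own stream
            if type(handler) is logging.StreamHandler:
                handler.setStream(sys.__stderr__)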
https://github.com/pytest-dev/pytest/pull/6034 should fix this, if you want to give it a try.
@blueyed I'm not a Python expert, but your fix seems to me like a workaround rather than a fix. I'm wondering why we cannot do it the atexit way? It seems to work fine for me (https://github.com/wanam/pytest/commit/d0d1274486e57196e5e1cc1f0ace67ff6b48e641).
I'd rather switch those to an exception that clearly tells people to use a better logging setup, plus a documented best practice for setting up logging.
From my POV it's important to just say NO to tactically broken but convenient setups that would make the internals a more painful mess.
If you set up logging at import, you deal with the fallout! pytest shouldn't suffer excess complexity to compensate for what basically amounts to broken/discouraged setups that don't play well with the rest of the world.
I'd rather switch those to an exception that clearly tells people to use a better logging setup, plus a documented best practice for setting up logging.
From my POV it's important to just say NO to tactically broken but convenient setups that would make the internals a more painful mess.
I understand the motivation for this, but it would be a very unfortunate decision, e.g. for people using ROS (Robot Operating System). I ran into https://github.com/pytest-dev/pytest/issues/5577 specifically in this context. The problem is that the "tactically broken" logging setup can come out of ROS itself, and, what makes this an impossible problem to work around as a user, the ROS Python side is tightly coupled to its native distribution (yes, it is quite ugly...). As a result, "fixing" these issues is not a very realistic option, because one would have to maintain their own ROS distribution fork :(.
That is just one example. It would be nice if we were able to use py.test even when third-party libraries are guilty of setting up logging at import, in particular if these libraries are not pip-installable and thus hard to fix. Even worse: third-party library authors may even prefer to stick to the anti-pattern for personal reasons (I recently had such a discussion about a related on-import anti-pattern).
I'll have to give #6034 a try.
What's preventing filing an actual issue against ROS?
In any case, it's safe to say that hiding this issue will just make things hideous. Strangely enough, I would strongly suggest disabling capture on broken platforms and reporting issues against the platform, instead of putting resource leaks into pytest to "work" on broken platforms.
Your message can very easily be read as "our vendor won't fix things anyway, so let's make pytest worse instead",
and I reject that notion. Please get at your vendor and have them fix stuff.
What's preventing filing an actual issue against ROS?
Nothing, but much like an operating system, ROS comes in distributions with slow release cycles. Robotic developers usually stay with one ROS distribution for years, because it is neither straightforward to migrate from one release to another nor to maintain one's own distribution containing backports. In other words: even if the upstream issue gets fixed, we wouldn't be able to use it any time soon, because we have to target existing releases.
Keep in mind that in C/C++ libraries like ROS, the Python bindings play an inferior role. The maintainers are not Python developers, and often Python best practices don't play a big role or are even ignored deliberately to optimize for special use cases. It might not be entirely straightforward to fix this issue in ROS in general.
Your message can very easily be read as "our vendor won't fix things anyway, so let's make pytest worse instead",
I've indeed experienced such unwillingness to adhere to Python best practices already. In any case, to me pytest would be "better" if it were less restrictive about the code it supports. Even for such "broken" libraries, py.test is a great tool for writing unit tests ;).
@bluenote10 in that case I advise deactivating capture as a starting point, and I suspect there could be a pytest plugin that hijacks the logging system on ROS, for example, and then enables a pytest-compatible setup/configuration.
That way core pytest doesn't have to care about a broken system, and a pytest plugin can fix/patch the mess-ups of the vendor.
In any case this should and must be an issue against ROS: retiring a shoddy workaround after a decade is less horrible than leaving it be because nobody fixed the origin.
Given that others mess up logging as well, this could perhaps be pytest-gumby-logging, which would deal with logging in the fashion of the Gumbys (see the Monty Python sketches).
@wanam #6034 uses atexit - similar to your patch. But with yours it might run out of open file descriptors, e.g. with pytest's own test suite.
From my POV it's important to just say NO to tactically broken but convenient setups that would make the internals a more painful mess.
I think it suggests the internals are in a messy state if a certain aspect of basic end-to-end usage isn't well supported by existing internal primitives. If this use case is somehow weird or special from the point of view of the assumptions pytest is making, that strikes me as a criticism of those assumptions, not a criticism of the use case.
If you set up logging at import, you deal with the fallout! pytest shouldn't suffer excess complexity to compensate for what basically amounts to broken/discouraged setups that don't play well with the rest of the world.
Setting up logging at import is such a ubiquitous thing that I just can't agree with this. It's like saying, "if you define your own functions, then you deal with the following mess". These are not at all broken/discouraged setups; they are straightforward, basic workflows for tons of use cases that need pytest's support.
I understand it's a difference of opinion, but it feels like it's straying very far into philosophical contortion to justify not supporting something that is clearly an end-to-end use case users need. I really don't think "don't do it that way" is a reasonable answer to this.
@spearsem the technical best practice is: libraries don't set up logging on import; applications set up logging on startup.
That prevents a number of issues, like triggering configuration of logging in a situation where the system state is unexpected.
So as far as I'm concerned, this is about practical misuse of the stdlib logging module.
The stdlib itself even suggests not triggering this at import, but rather at program startup.
So for me this is not a topic of far-off philosophy; this is about ignoring the best practices that are in place to prevent the very fallout that's now being complained about.
Duplicate of https://github.com/pytest-dev/pytest/issues/5282.
After reading this thread and considering this some more, I think the real issue is that pytest is not playing well with any system which hijacks sys.stdout or sys.stderr, and I can see this happening in cases other than logging (something like hijacking sys.stdout to write to both stdout and to a file).
Given that this does not only affect logging, it might make sense for pytest to make an effort here if feasible; after all, pytest itself is hijacking stdout and stderr, so it might as well try to play nice with others when possible.
#6034 by @blueyed is an attempt at this.
@nicoddemus Yep. But isn't it still a duplicate?
Whatever your philosophy about how logging should be set up, I would hope that the guiding principle of a testing framework is that it should not change the underlying semantics of the runtime system.
If pytest breaks your code when doing something that works perfectly fine otherwise -- recommended or not -- then you are putting a heavy burden on the user when they encounter a problem like this.
Practically speaking even after reading all of the above I still don't understand how to fix my broken tests.
I "fixed" my tests by explicitly changing logging to zero during at exit. If something goes wrong, it will except out and I can manually add print and other statements and rerun the test without pytest there so it doesn't break.
Clearing all the log handlers during test teardown works for me.
I invoke the following function in my shared test fixture to remove all logging handlers. In my case, I only need to reset the root logger, but depending on your application, you may need to do all of them.
def clear_loggers():
    """Remove handlers from all loggers"""
    import logging

    loggers = [logging.getLogger()] + list(logging.Logger.manager.loggerDict.values())
    for logger in loggers:
        handlers = getattr(logger, 'handlers', [])
        for handler in handlers:
            logger.removeHandler(handler)
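For reference, wiring it into an autouse fixture might look roughly like this (a sketch, assuming the clear_loggers() above is in scope, e.g. defined in conftest.py):

import pytest

@pytest.fixture(autouse=True)
def cleanup_logging():
    yield  # run the test body first
    clear_loggers()  # then drop all handlers during teardown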
Our codebase tampers with sys.stdout and sys.stderr, so the various handler workarounds did not work for me. I ended up doing this:
import sys

import pytest

@pytest.fixture(autouse=True)
def capture_wrap():
    # Make close() a no-op on the captured streams so late writers don't
    # hit "ValueError: I/O operation on closed file".
    sys.stderr.close = lambda *args: None
    sys.stdout.close = lambda *args: None
    yield
Not a great general-purpose workaround, but posting it here in case someone else finds the approach useful.
@analog-cbarber's fix worked for me within a pytest_sessionfinish hook in conftest.py to clean up handlers in multi-threaded libraries.
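Roughly like this (a sketch; clear_loggers() is the helper defined above):

# conftest.py
def pytest_sessionfinish(session, exitstatus):
    # runs once after the whole session, before the interpreter exits
    clear_loggers()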
A quick fix for me was to turn off capturing, following https://www.stefaanlippens.net/pytest-disable-log-capturing.html
I added --capture=no --log-cli-level=INFO to my pytest invocation, though I'm still a little miffed at an external library I am using that dumps info via logging and print invocations, making life way noisier.
I just ran into this too. In our case, setting up the logger is done in a function, not at module level, but we ran into this because we have a few tests for that setup function which check that the logger outputs to the right place using capsys (the combination of these two is what triggers the error). Those tests didn't account for the fact that setting up the logger has a global side effect: it adds handlers to the global logger. I've worked around this by creating https://github.com/pytest-dev/pytest/issues/5502#issuecomment-647157873 as a fixture which I added to those test functions.
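For illustration, a hypothetical sketch of that kind of test (all names invented, not our actual code):

import logging
import sys

def setup_logging():
    # the function under test: note it mutates the *global* root logger
    logging.getLogger().addHandler(logging.StreamHandler(sys.stderr))

def test_setup_logging_writes_to_stderr(capsys):
    setup_logging()  # the handler now points at capsys' captured stderr
    logging.getLogger().error("boom")
    assert "boom" in capsys.readouterr().err
    # side effect: the handler stays attached after the test, holding a
    # stream that pytest will later close -- the source of the errors above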
I feel like this deserves to be documented somewhere, but I'm not sure where. The messages are going to be emitted far away from where the actual issue lies... Maybe as a warning in the capture how-to? https://docs.pytest.org/en/7.1.x/how-to/capture-stdout-stderr.html
Clearing all the log handlers during test teardown works for me.
I invoke the following function in my shared test fixture to remove all logging handlers. In my case, I only need to reset the root logger, but depending on your application, you may need to do all of them.

def clear_loggers():
    """Remove handlers from all loggers"""
    import logging

    loggers = [logging.getLogger()] + list(logging.Logger.manager.loggerDict.values())
    for logger in loggers:
        handlers = getattr(logger, 'handlers', [])
        for handler in handlers:
            logger.removeHandler(handler)
@analog-cbarber, by calling removeHandler() you effectively mutate the list you're iterating over (the one returned by getattr(logger, "handlers", [])). The end result is that not all handlers are removed from the logger. The fix is to iterate over a shallow copy of the handlers list, for example via slice notation (inspiration: https://youtu.be/xeLecww65Zg?t=805).
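A quick demonstration of the skipping behavior (plain Python, nothing logging-specific):

handlers = ["h1", "h2", "h3"]
for h in handlers:        # the for loop walks the list by index
    handlers.remove(h)    # removal shifts the remaining items left
print(handlers)           # ['h2'] -- "h2" was skipped, not removed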
This works well then:
def clear_loggers():
    """Remove handlers from all loggers"""
    import logging

    loggers = [logging.getLogger()] + list(logging.Logger.manager.loggerDict.values())
    for logger in loggers:
        if not hasattr(logger, "handlers"):
            continue
        for handler in logger.handlers[:]:
            logger.removeHandler(handler)
By the way, the credit goes to this person: https://stackoverflow.com/a/7484605/3499937.
Pytest probably needs to warn every time user code commits the "crime" of setting up stdio log handlers during test setup.