pytest-monitor
Missing metrics for failed tests
Describe the bug
Monitoring results for failed tests are missing from `.pymon`.
To Reproduce
- Create some tests:

```python
# test.py
def test_1():
    assert 1 == 1

def test_2():
    assert 1 == 2
```

- Run pytest:

```bash
pytest test.py
```
- `.pymon` has only one entry, for `test_1`, in the `TEST_METRICS` table (see the query sketch below).
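For context, `.pymon` is a plain SQLite database, so the missing row can be verified directly. A minimal sketch, assuming the `ITEM` column from pytest-monitor's documented `TEST_METRICS` schema:

```python
# Inspect which tests got a row in TEST_METRICS; after the repro above,
# only test_1 is expected to appear.
import sqlite3

con = sqlite3.connect(".pymon")
for (item,) in con.execute("SELECT ITEM FROM TEST_METRICS"):
    print(item)
con.close()
```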
Expected behavior
Unless I missed something in the documentation, I expect metrics to be reported for failed tests as well.
Desktop (please complete the following information):
- OS: Linux (Ubuntu 20.04), Windows 10
- Python version: 3.8, 3.11
- Pytest version: 6.x, 7.x
- pytest-monitor version: 1.6.5
Same issue here. In my current setup it would be especially interesting to see metrics for tests that fail.
Hello,
Thanks for submitting. Failed tests are filtered out on purpose (the plugin was primarily designed that way to get consistent metrics). However, the subject was previously raised in #27, and at that time the decision was made to postpone it.
I think it may be time for that improvement. Now, the questions:
- Shall we add the test status (failed or succeeded) or not? I personally think the status is of interest to those who want to filter out failed tests.
- Shall we add an option to activate this behavior (in which case the current behavior stays the default), or should we make it the default? (A sketch of what such an option could look like follows below.)
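For illustration, a minimal sketch of an opt-in flag using pytest's standard option API; the option name `--monitor-include-failed` is hypothetical, not an existing pytest-monitor option:

```python
# Hypothetical opt-in flag for recording failed tests (the name is an
# assumption, not part of pytest-monitor today).
def pytest_addoption(parser):
    group = parser.getgroup("monitor")
    group.addoption(
        "--monitor-include-failed",
        action="store_true",
        default=False,
        help="Also record metrics for failed tests.",
    )
```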
@js-dieu It's useful because I expect to use this to test for memory leaks (a sketch of such a check follows the answer list). To answer your questions:
- Yes, add the status to have parity with the pytest output
- It should be on by default since that's what the user would see in the pytest output. They should have an option to turn off the plugin, though. One row for each test case, please.
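To make the memory-leak use case concrete, here is an illustrative sketch that lists per-test memory across recorded sessions so growth over time becomes visible; the column names (`ITEM`, `SESSION_H`, `MEM_USAGE`, `ITEM_START_TIME`) are taken from pytest-monitor's schema and may differ between versions:

```python
# Print recorded memory per test, ordered by run time, so an upward trend
# for a given test (a possible leak) stands out.
import sqlite3

con = sqlite3.connect(".pymon")
query = """
    SELECT ITEM, SESSION_H, MEM_USAGE
    FROM TEST_METRICS
    ORDER BY ITEM, ITEM_START_TIME
"""
for item, session, mem in con.execute(query):
    print(item, session, mem)
con.close()
```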
I'd also like to see the results of failed tests in the database by default.
@js-dieu If you can point me to where changes are needed to accomplish that, and if it's not too much implementation effort, my employer allows me to implement this kind of thing during my work time.
Hi @Veritogen / @Diego-Hernandez-Moodys
Sorry for the long delay. I have reworked the plugin a bit to be able to support this feature without breaking everything. I expect to deliver it by the end of the week. I first need to fix a bias in the memory snapshot, but I don't expect that to take very long.
@js-dieu Awesome, that's great to hear. Let me know if there's anything you need help with. I also made a proof of concept for a DBHandler that can use PostgreSQL for persisting the test results (rough shape sketched below). Should I raise a PR for that?
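For the curious, the rough shape such a handler could take; this is not the actual proof of concept, and the table and column names are placeholders:

```python
# Placeholder sketch: persist one metrics row per test into PostgreSQL
# instead of the local .pymon SQLite file.
import psycopg2

class PostgresDBHandler:
    def __init__(self, dsn):
        self._con = psycopg2.connect(dsn)

    def insert_metric(self, item, total_time, mem_usage):
        # The connection context manager commits the transaction on success.
        with self._con, self._con.cursor() as cur:
            cur.execute(
                "INSERT INTO test_metrics (item, total_time, mem_usage)"
                " VALUES (%s, %s, %s)",
                (item, total_time, mem_usage),
            )
```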
Is there any progress on that? The feature would be quite handy.
As the discussion froze and @js-dieu seems to be busy, I decided to make my own changes locally. I'll summarize my insights below in case this feature is implemented in the future.
- I added a new boolean column, "TEST_PASSED", to the TEST_METRICS table to reflect whether a test passed, and changed the wrapper functions accordingly.
- As the memory_profiler module is no longer maintained and the company I work for is not able to take over maintainership, I stripped out and simplified the logic needed for memory measurement and put it in a new module, profiler.py, inside this project.
- I changed the function calls from the plugin to the profiler accordingly and introduced a new attribute, "passed", on the pytest node so the information can be handed through to the database (a minimal sketch of this follows).
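For reference, a minimal sketch of the hook part of the approach described above; this is not the actual patch, and the `passed` attribute name simply mirrors the last bullet:

```python
# conftest.py / plugin sketch: remember the call-phase outcome on the item
# so the persistence layer can fill the (hypothetical) TEST_PASSED column.
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == "call":
        item.passed = report.passed
```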
Of course I would be open to making a PR if you want, but I'm not sure whether I did it the way you intended.
For anyone interested in using this feature: as long as it hasn't been merged upstream, there is a soft fork available here: https://github.com/einhundert/pytest-monitor/ (we only provide support for features and changes made by us; please don't request anything unrelated to these).