Per-file coverage threshold
Use cases:
- fail on unexecuted files
- starting to write unit tests in a big project where overall coverage is low, but we can increase per-file coverage more easily
Can you say more about what you want this to do? What would the user experience be?
--fail-under-file 0 would return exit code 2 for any unexecuted files
--fail-under-file 10 would return exit code 2 for any files with individual coverage less than 10%
Thanks, that makes it clear.
I'm not sure how this would help for your second case ("starting to do unit tests in a big project"): coverage would fail for a very long time, until you managed to get at least 10% (or whatever) coverage in every single file. That seems like it would be discouraging, and push you toward the wrong metric.
Is there any ongoing effort to implement --fail-under-file 10? I guess this would be a big benefit for most projects, because with that feature you can check which developers haven't done their testing homework.
Let's say I have 10 files: 9 of them are at 100%, one of them is at 0%, and I set my limit to 90% coverage. Currently it will pass, because it's taking the average: with equally sized files that's (9 × 100% + 1 × 0%) / 10 = 90%, which meets the limit. But I don't want it to pass: that one file is below my 90% threshold, so the check should fail.
When can we expect this feature?
See also #717, which is similar.
One option while waiting for coverage.py to add this as a feature: implement it as a separate tool. You can get a JSON report from coverage.py, and then check the totals for each file. This would be a way to experiment with different styles of rules also ("tests/" must have 100%, "project/" must have 90%, or whatever).
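For example, here is a minimal sketch of such a checker, assuming the layout of the coverage.json file produced by coverage json; the threshold, report path, and exit code mirror the --fail-under-file proposal above but are otherwise illustrative:

```python
import json
import sys

# Illustrative values, not coverage.py options.
THRESHOLD = 90.0
REPORT = "coverage.json"

with open(REPORT) as f:
    report = json.load(f)

# The JSON report maps each measured file to per-file summary data,
# including its coverage percentage.
failing = []
for filename, info in report["files"].items():
    pct = info["summary"]["percent_covered"]
    if pct < THRESHOLD:
        failing.append((filename, pct))

for filename, pct in failing:
    print(f"{filename}: {pct:.1f}% is below {THRESHOLD}%")

# Exit 2 on any under-threshold file, like --fail-under does for the total.
sys.exit(2 if failing else 0)
```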
I've written a proof-of-concept using the JSON report: https://github.com/nedbat/coveragepy/blob/master/lab/goals.py
Try it and let me know what you think.
... and a blog post about it: https://nedbatchelder.com/blog/202111/coverage_goals.html
Looking forward to this feature!
@RodriguezLucha you can get it now: https://nedbatchelder.com/blog/202111/coverage_goals.html Or is there a reason that isn't sufficient?
@nedbat I've ended up reimplementing different ad-hoc variants of this feature over the years, and personally I think it would make a lot of sense to include this in coverage.py itself, to reduce the number of dependencies and have a standardized way of doing it. I'm willing to help out with implementing & documenting this feature if you agree that it should go into coverage.py itself.
> @RodriguezLucha you can get it now: https://nedbatchelder.com/blog/202111/coverage_goals.html Or is there a reason that isn't sufficient?
It's easier to convince a team to introduce a new configuration option than a new file. 🤷‍♂️
I came across this issue 3 times already because I wanted to suggest it on different projects.
But well... the script should be enough. I would not suggest it, though, because the need for this functionality on a project does not outweigh the weight of carrying that extra file. But if it were in coverage itself, it would be just a line of configuration.
Anyway, I fully understand you. But if the feature were available in coverage, I'd probably use it on every project that doesn't already have 100% coverage.
This was super helpful to enforce full coverage on our test files (and uncovered some broken tests in the process).
Maybe as a middle ground, you could add this as a separate console_script in the coverage library, without actually adding it to the coverage command?
One piece of feedback (which I can open a PR for if you want) would be to use logging.error on lines like
https://github.com/nedbat/coveragepy/blob/3fac1386203b0ac74d028321759f03d97a2b053d/lab/goals.py#L78
so that they show up better in some CI systems (Bamboo was initially hiding this in one of the output panes).
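Roughly the change I mean, as a hedged sketch; the report_failure helper and the message text are hypothetical stand-ins, not the actual code in goals.py:

```python
import logging

logging.basicConfig(format="%(levelname)s: %(message)s")
logger = logging.getLogger("coverage-goals")

# Hypothetical stand-in for the failure output in goals.py: routing the
# message through logging.error instead of print() so CI systems (Bamboo,
# for one) classify it as an error rather than ordinary stdout text.
def report_failure(filename: str, pct: float, goal: float) -> None:
    logger.error("%s: %.1f%% is below the goal of %.1f%%", filename, pct, goal)

report_failure("project/models.py", 42.0, 90.0)
# emits to stderr: ERROR: project/models.py: 42.0% is below the goal of 90.0%
```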
This feature would be quite interesting to have in this tool by default. I very often realise I missed testing a whole file because I imported a function from another file (a bad copy-paste, obviously), and only when looking at the details of all the files and seeing a 0% do I notice the mistake. Since there are many files, the total coverage stays above 90%, but this option would catch the mistake easily.
There are plenty of other tools in other languages that provide this by default, so why not here?
> why not here?
The usual tradeoff of having to support code, and wondering how much use it would get. I suppose it wouldn't be much work to add a new coverage goal command with a command line similar to the goals.py program from my blog post. I'm just not sure how many people would find it useful.
Do you have a suggestion on how to estimate that?
I'd use it for uvicorn. 😬👍
> Do you have a suggestion on how to estimate that?
The best we can do is gauge from comments on issues, and guess.
If you can be more objective about what is needed to make a decision here, I can try to help... 👀
Thanks for the offer, but there is nothing more objective. We don't have a way to poll the users of coverage.py.
I'm not sure if this is the same: "How can I enforce 100% line coverage for test files in Python?"
I'd be interested in having this feature as well!