
per file coverage threshold

Open graingert opened this issue 7 years ago • 31 comments

use cases:

  • fail on unexecuted files
  • starting to do unit tests in a big project where overall coverage is low, but per-file coverage can be increased more easily

graingert avatar Aug 08 '18 15:08 graingert

Can you say more about what you want this to do? What would the user experience be?

nedbat avatar Aug 08 '18 20:08 nedbat

--fail-under-file 0 would return exit code 2 for any unexecuted files

graingert avatar Aug 09 '18 07:08 graingert

--fail-under-file 10 would return exit code 2 for any files with individual coverage less than 10%

graingert avatar Aug 09 '18 07:08 graingert

Thanks, that makes it clear.

I'm not sure how this would help for your second case ("starting to do unit tests in a big project"): coverage would fail for a very long time, until you managed to get at least 10% (or whatever) coverage in every single file. That seems like it would be discouraging, and push you toward the wrong metric.

nedbat avatar Aug 09 '18 10:08 nedbat

Is there any ongoing effort to implement --fail-under-file 10? I think this would be a big benefit for most projects, because with that feature you can check which developer didn't do their test homework.

cbernecker avatar Jun 17 '19 13:06 cbernecker

Let's say I have 10 files, of which 9 of them are 100%, and one of them is 0%, and I set my limit to 90% coverage. Currently, it will pass because it's taking the average, but I don't want it to pass. That one file has below my coverage percentage of 90% so it should fail.
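The difference between the current overall check and the proposed per-file check can be sketched in a few lines (illustrative numbers only, not coverage.py's actual implementation):

```python
# Nine fully covered files and one completely uncovered file.
per_file = [100.0] * 9 + [0.0]
threshold = 90.0

# Current behavior: --fail-under compares the overall average.
average = sum(per_file) / len(per_file)
passes_overall = average >= threshold  # 90.0 >= 90.0, so this passes

# Proposed behavior: compare every file individually.
passes_per_file = all(pct >= threshold for pct in per_file)  # the 0% file fails
```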

When can we expect this feature?

wshaikh avatar Oct 15 '21 17:10 wshaikh

See also #717, which is similar.

nedbat avatar Oct 16 '21 20:10 nedbat

One option while waiting for coverage.py to add this as a feature: implement it as a separate tool. You can get a JSON report from coverage.py, and then check the totals for each file. This would be a way to experiment with different styles of rules also ("tests/" must have 100%, "project/" must have 90%, or whatever).
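A minimal sketch of such an external checker (not nedbat's goals.py; it assumes the `files` → `summary` → `percent_covered` layout of the report written by `coverage json`, so check the field names against your coverage.py version):

```python
import json
import sys

def per_file_failures(report_path, threshold):
    """Return (filename, percent_covered) pairs for files below the threshold."""
    with open(report_path) as f:
        report = json.load(f)
    return [
        (name, data["summary"]["percent_covered"])
        for name, data in report["files"].items()
        if data["summary"]["percent_covered"] < threshold
    ]

def main(report_path="coverage.json", threshold=10.0):
    failures = per_file_failures(report_path, threshold)
    for name, pct in failures:
        print(f"{name}: {pct:.1f}% is below {threshold:.1f}%", file=sys.stderr)
    # Mirror the exit-code-2 behavior proposed above.
    return 2 if failures else 0
```

Usage would be `coverage json` followed by a script that calls `sys.exit(main())`, and the rule styles mentioned above (100% for "tests/", 90% for "project/") could be layered on by filtering filenames before the threshold comparison.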

nedbat avatar Oct 17 '21 17:10 nedbat

I've written a proof-of-concept using the JSON report: https://github.com/nedbat/coveragepy/blob/master/lab/goals.py

Try it and let me know what you think.

nedbat avatar Oct 31 '21 20:10 nedbat

... and a blog post about it: https://nedbatchelder.com/blog/202111/coverage_goals.html

nedbat avatar Nov 02 '21 00:11 nedbat

Looking forward to this feature!

RodriguezLucha avatar Jan 18 '22 17:01 RodriguezLucha

@RodriguezLucha you can get it now: https://nedbatchelder.com/blog/202111/coverage_goals.html Or is there a reason that isn't sufficient?

nedbat avatar Jan 18 '22 18:01 nedbat

@nedbat I've ended up reimplementing different ad-hoc variants of this feature over the years, and personally I think it would make a lot of sense to include this in coverage.py itself, to reduce the number of dependencies and have a standardized way of doing it. I'm willing to help with implementing & documenting this feature if you agree that it should go into coverage.py itself.

jdahlin avatar Aug 10 '22 14:08 jdahlin

@RodriguezLucha you can get it now: https://nedbatchelder.com/blog/202111/coverage_goals.html Or is there a reason that isn't sufficient?

It's easier to convince a team to introduce a new configuration line than a new file. 🤷‍♂️

I came across this issue 3 times already because I wanted to suggest it on different projects.

But well... the script should be enough. I would not suggest it, though, because the weight of having that file does not outweigh the need for this functionality on a project. But if it were in coverage itself, it would be just a line of configuration.

Anyway, I fully understand you. But if the feature were available in coverage, I'd probably use it on every project that doesn't have 100% coverage already.

Kludex avatar Aug 21 '22 13:08 Kludex

This was super helpful to enforce full coverage on our test files (and uncovered some broken tests in the process).

Maybe as a middle ground, you could ship this as a separate console_script in the coverage library, without actually adding it to the coverage command?

One piece of feedback (which I can open a PR for if you want) would be to use logging.error on lines like https://github.com/nedbat/coveragepy/blob/3fac1386203b0ac74d028321759f03d97a2b053d/lab/goals.py#L78 so that they show up better in some CI systems (Bamboo was initially hiding this in one of the output panes).
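The suggested change, roughly, would swap a plain print for a logged error so CI log parsers pick up the severity (a sketch; the function name `report_failure` is hypothetical, not from goals.py):

```python
import logging

logging.basicConfig(format="%(levelname)s: %(message)s")
log = logging.getLogger("coverage-goals")

def report_failure(filename, pct, goal):
    # ERROR-level records are what CI systems like Bamboo scan for and
    # surface in their log panes, unlike bare stdout prints.
    log.error("%s coverage is %.1f%%, below goal of %.1f%%", filename, pct, goal)
```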

chriselion avatar Aug 23 '22 21:08 chriselion

This feature would be quite interesting to have in this tool by default. I very often realise I missed testing a whole file because I imported a function from another file (a bad copy-paste, obviously), and only when looking at the details of all the files and seeing a 0% do I know I made a mistake. Since there are many files, the total coverage stays above 90%, but having this option would catch the mistake easily.

There are plenty of other tools in other languages that provide this by default, so why not here?

mebibou avatar Jul 11 '23 07:07 mebibou

why not here?

The usual tradeoff of having to support code, and wondering how much use it would get. I suppose it wouldn't be much work to add a new command coverage goal that had a similar command line to the goals.py program from my blog post. I'm just not sure how many people would find that useful.

nedbat avatar Jul 16 '23 19:07 nedbat

Do you have a suggestion on how to estimate that?

I'd use it for uvicorn. 😬👍

Kludex avatar Jul 16 '23 20:07 Kludex

Do you have a suggestion on how to estimate that?

The best we can do is gauge from comments on issues, and guess.

nedbat avatar Jul 16 '23 20:07 nedbat

If you can be more objective about what is needed to make a decision here, I can try to help... 👀

Kludex avatar Jul 16 '23 21:07 Kludex

Thanks for the offer, but there is nothing more objective. We don't have a way to poll the users of coverage.py.

nedbat avatar Jul 16 '23 21:07 nedbat

I'm not sure if this is the same: How can I enforce 100% line coverage for test files in Python?

martin-thoma avatar Jan 31 '24 19:01 martin-thoma

I'd be interested in this feature as well!

florian-guily avatar Feb 08 '24 17:02 florian-guily