coveragepy
Don't measure coverage from certain lines
Originally reported by pckroon (Bitbucket: pckroon, GitHub: pckroon)
Hi all,
first off, thanks for the great work you've been doing so far with this :)
This is a duplicate of the issue I filed on pytest-cov earlier (https://github.com/pytest-dev/pytest-cov/issues/207). I have a project that parses data files on import, which means that the parser code is always reported as covered, even though it is not actually tested. Is there any way to either record a "background" that is later subtracted, or to specifically exclude coverage coming from certain lines (i.e. the import statements)?
Minimal example:
project
|-setup.py
|-project
||-data.py
||-func.py
||-__init__.py
|-tests
||-test_x.py
||-test_data.py
project/__init__.py:
from .func import f
project/data.py:
def parse_datafile():
    return 8
a = parse_datafile()
project/func.py:
def f(x): return x**2
tests/test_x.py:
from project.data import a # pragma: no cover
from project import f
def test_f():
    assert f(a) == a**2
tests/test_data.py:
from project.data import parse_datafile # pragma: no cover
def test_parse():
    assert parse_datafile() == 8
> coverage erase && pytest --cov=project tests/test_x.py
============================= test session starts ==============================
platform linux -- Python 3.5.2, pytest-3.6.0, py-1.5.3, pluggy-0.6.0
rootdir: /home/peterkroon/python/coverage_meuk, inifile:
plugins: cov-2.5.1
collected 1 item
tests/test_x.py . [100%]
----------- coverage: platform linux, python 3.5.2-final-0 -----------
Name                 Stmts   Miss  Cover
----------------------------------------
project/__init__.py      1      0   100%
project/data.py          3      0   100%
project/func.py          2      0   100%
----------------------------------------
TOTAL                    6      0   100%
I expect project/data.py to be 0% covered, unless I also run test_data.py. I can't make that work with a .coveragerc or pragmas. Any advice is highly welcome :)
- Bitbucket: https://bitbucket.org/ned/coveragepy/issue/668
Original comment by pckroon (Bitbucket: pckroon, GitHub: pckroon)
I think (from coverage's perspective) I could make a file like this:
background.py
import project.data
and then do
coverage run --background=background.py
where background.py is run first and recorded, and the lines covered there are subtracted from the actual coverage results from the tests. pytest-cov could wrap this by making the background equivalent to test discovery.
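For illustration, a subtraction like this can be scripted against coverage.py's CoverageData API (coverage 5.x). A minimal sketch, assuming two hypothetical data files, .coverage.background (a run that only triggers the imports) and .coverage.tests (the real test run), and handling line data only:

subtract_background.py
from coverage import CoverageData

background = CoverageData(basename=".coverage.background")
background.read()
tests = CoverageData(basename=".coverage.tests")
tests.read()

# Written as the default data file, so a plain `coverage report` will use it.
result = CoverageData(basename=".coverage")
for path in tests.measured_files():
    test_lines = set(tests.lines(path) or ())
    bg_lines = set(background.lines(path) or ())
    # Keep only the lines the tests hit beyond the import-time background.
    result.add_lines({path: test_lines - bg_lines})
result.write()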
How do you envision running your tests so that you can run test_data and get the lines counted, and also run other tests and not get the lines counted?
Original comment by pckroon (Bitbucket: pckroon, GitHub: pckroon)
Thanks for the lightning fast reply. And indeed, running test_data should count it as covered, but currently running test_x also covers it.
What I want to use it for is to make sure the parser code is also actually tested and produces the expected output, so it's mostly about the percentages.
Hmm, this seems like the opposite of the nocover pragma: a line that is executed, but you don't want counted as covered. There is no way to do that now. I'm not sure how you would use it in your scenario, since running test_data.py should count it as covered.
Perhaps if #170 (who tests what) is ever implemented, it will give you the information you need?
Can you say more how you would use this in a real project? Are you trying to make the total percentage more accurate, or are you trying to make the red/green line markers more accurate? Or something else?
@pckroon The new context feature in 5.0a3 might be usable for this: https://nedbatchelder.com/blog/201810/who_tests_what_is_here.html
Cheers. I skimmed it and it looks good.
As it is now, I see two ways of implementing what I need:
- Coverage keeps track of how often a line is hit. You/I record this for the test suite, and for a file that just does the imports, and subtract the coverages.
- Trash all coverage that did not come from a test function. That way coverage that came from imports is not reported.
With the new feature, option 2 seems easier to implement. What is your view on this?
Definitely option 2 could be done now. If you use 5.0a3, you can delete the recorded data for the empty context, and then report on what is left.
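A minimal sketch of that deletion, assuming the coverage 5.x SQLite schema (tables context, line_bits, and arc; check your data file with sqlite3's .schema before relying on this):

delete_empty_context.py
import sqlite3

con = sqlite3.connect(".coverage")
with con:  # commits on success
    # The import-time "background" is recorded under the empty context name.
    for (context_id,) in list(con.execute(
            "SELECT id FROM context WHERE context = ''")):
        con.execute("DELETE FROM line_bits WHERE context_id = ?", (context_id,))
        con.execute("DELETE FROM arc WHERE context_id = ?", (context_id,))
        con.execute("DELETE FROM context WHERE id = ?", (context_id,))
con.close()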
Then from a practical point of view: do I try to convince someone from pytest-cov to implement this, or do I write my own plugin for coverage.py, pytest-cov, or pytest?
I guess for the short term, making my own plugin for coverage would be the quickest test/implementation. For long-term adoption I should make/revive an issue on pytest-cov. I'll have a look at that soon.
What's the status here?
There's been no progress on this. Do you have a new scenario that could help us find a solution?
With static contexts, you could run the test suite once with no tests and --context=background. Then run the tests with --context=tests. Then there are two options:
- Run a separate program to delete the "background" context data from the SQLite data file, and report as usual.
- Add an option to the coverage reporting commands to report on only certain contexts (or to exclude certain contexts).
> Run a separate program to delete the "background" context data from the SQLite data file, and report as usual.
In commit 6a1c275b94818ccef91481ab01d0eb000906967a I threw together a quick program to do this: select_contexts.py.
I'm not sure it does what we need yet. Maybe one of these two scenarios is what you want... Try it out and let me know:
Excluding code outside of any test
In your .coveragerc file, add:
[run]
dynamic_context = test_function
This will record which test function was running for each data point. Run your test suite as usual. The code running outside the test functions will have an empty context recorded.
Use select_contexts.py to subset the data file, then report on the resulting data:
% python select_contexts.py --exclude='^$'    # regex to exclude the empty context
% coverage html --data-file=output.data
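As a sanity check before filtering, the recorded contexts can be listed through the data API (CoverageData.measured_contexts() is available in coverage 5.0+); a quick sketch:

from coverage import CoverageData

data = CoverageData()   # defaults to the .coverage file in the current directory
data.read()
print(sorted(data.measured_contexts()))
# Expect one name per test function, plus '' for the import-time background.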
Excluding code run when no tests are run
% coverage run --parallel --context=background ... somehow run your test suite with no tests ...
% coverage run --parallel --context=tests ... run your test suite as usual ...
% coverage combine
% python select_context.py --exclude=background
% coverage html --data-file=output.data
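An alternative sketch to deleting rows from the SQLite file: copy only the wanted context into a fresh data file with set_query_contexts(), which takes a list of regexes and limits what lines() returns afterwards (coverage 5.0+):

keep_tests_context.py
from coverage import CoverageData

src = CoverageData()                 # the combined .coverage file from above
src.read()
src.set_query_contexts(["tests"])    # keep only data recorded under --context=tests

dst = CoverageData(basename="output.data")
for path in src.measured_files():
    dst.add_lines({path: src.lines(path) or ()})
dst.write()
# Then report on the filtered copy: coverage html --data-file=output.data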
@nedbat Your script helped me out really well. It would be nice to be able to ignore the lines found this way altogether and mark them as "not executable". In my Django project I really want to ignore those unnecessary loading/declaration statements. Though I saw this data is not in the objects currently traversed by your script.