streamz
cover tests
Codecov Report
Merging #273 into master will decrease coverage by 9.28%. The diff coverage is n/a.
@@ Coverage Diff @@
## master #273 +/- ##
==========================================
- Coverage 95.42% 86.14% -9.29%
==========================================
Files 13 22 +9
Lines 1661 3781 +2120
==========================================
+ Hits 1585 3257 +1672
- Misses 76 524 +448
| Impacted Files | Coverage Δ | |
|---|---|---|
| streamz/dataframe/tests/test_cudf_dataframes.py | 2.97% <ø> (ø) | |
| streamz/dataframe/tests/test_dataframe_utils.py | 100.00% <ø> (ø) | |
| streamz/utils_test.py | 93.75% <0.00%> (-0.44%) | :arrow_down: |
| streamz/sources.py | 96.88% <0.00%> (-0.35%) | :arrow_down: |
| streamz/dataframe/core.py | 91.60% <0.00%> (-0.01%) | :arrow_down: |
| streamz/dask.py | 100.00% <0.00%> (ø) | |
| streamz/tests/test_kafka.py | 89.03% <0.00%> (ø) | |
| streamz/dataframe/tests/test_dataframes.py | 96.86% <0.00%> (ø) | |
| streamz/tests/test_graph.py | 100.00% <0.00%> (ø) | |
| streamz/tests/test_core.py | 97.13% <0.00%> (ø) | |
| ... and 5 more | | |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 9c8d3bb...8735ad8. Read the comment docs.
@martindurant I think it would be good to have coverage of the tests. We can `pragma: no cover` things that we won't cover.
Is it normal to consider tests as part of coverage? We would want to know about tests that didn't run, but it doesn't seem to be a useful metric for the package as a whole. Actually, we have two separate numbers in the report, so we probably don't need a limit on both.
I don't really mind, though, but we had better indeed mark some things we don't use. I don't know why the coverage appears to have gone down significantly.
I've been trying to exclude the test chunk from the library chunk. I think the coverage is low because we don't run many of the dataframe tests.
The coverage of the tests helps make certain that we are actually running them, and shows if we miss some chunks of the tests for some reason or another. (I stole the target from matplotlib.)
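For illustration only: one way to keep the two numbers (and targets) separate would be two path-scoped project status checks in `codecov.yml`. This is a sketch, not the configuration in this PR; the status names, targets, and path globs are all made up, and it assumes Codecov's `paths` filter on status checks.

```yaml
# Hypothetical codecov.yml sketch -- names, targets, and paths are illustrative.
coverage:
  status:
    project:
      library:
        target: 90%            # illustrative target for the package code
        paths:
          - "streamz/"         # note: this glob also matches the test files
      tests:
        target: 95%            # illustrative target for the test suite itself
        paths:
          - "streamz/tests/"
          - "streamz/dataframe/tests/"
```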
Whilst I agree:
- Why are we not running those DF tests?
- Why is the coverage diff negative?

- Many of the tests are cuDF tests, which we can't really run on Travis. We can leave them in, but we'll need to mark them as no cover.
- When the uncovered tests are averaged in with the covered code, it drops the percentage (I'm still working on test exclusion).
OK, so you can blanket ignore cuDF tests. It would be nice if coverage had a mechanism like pytest marks to set this automatically.
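On the pytest side, skipping the cuDF tests automatically when `cudf` is not importable can already be done with `pytest.importorskip`; a minimal sketch is below (the test name and body are made up, not taken from streamz). Note that this only skips the tests at run time; the skipped lines still show up as uncovered, so the coverage exclusion itself would still need the pragma or an `exclude_lines` pattern.

```python
# Hypothetical sketch: skip an entire cuDF test module when cudf is not
# installed (e.g. on Travis). Test name and body are illustrative.
import pytest

cudf = pytest.importorskip("cudf")  # skips the whole module if cudf is missing


def test_cudf_dataframe_roundtrip():
    df = cudf.DataFrame({"x": [1, 2, 3]})
    assert len(df) == 3
```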
Please fix the merge conflict, if you have the time.
I guess we can close this now, since we have `# pragma: no cover` and also `flaky` markers for tests? Or are there places where we can improve?
Do those cover the purpose of this PR? This PR was supposed to add an additional check to see the coverage of the test code itself. That way we could be certain that the entire test suite was being run (in case we missed anything and pieces of the tests were not actually being run).
Oh, I understand now. I misunderstood the purpose of this PR then. Apologies!
No problem, I should really merge master into this
We no longer have any cuDF tests in streamz, though, so the coverage is back on track. And branches of execution which are not tested because of cuDF/cuStreamz-specific functionality have been marked with `# pragma: no cover`. Maybe that should be enough? How can we check if we still need to cover tests?
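For reference, a minimal sketch of the two mechanisms mentioned above, assuming the `flaky` package and coverage.py's default `# pragma: no cover` exclusion; the function and test names are made up rather than taken from streamz.

```python
# Hypothetical sketch of the two mechanisms discussed in this thread.
from flaky import flaky


def to_frame(batch):
    # cuDF/cuStreamz-specific branch that CI cannot exercise, so it is
    # excluded from the coverage report with the standard pragma.
    if hasattr(batch, "to_pandas"):  # pragma: no cover
        return batch.to_pandas()
    return batch


# Re-run an intermittently failing test up to three times before failing it.
@flaky(max_runs=3, min_passes=1)
def test_to_frame_passthrough():
    assert to_frame([1, 2, 3]) == [1, 2, 3]
```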