
cover tests

Open CJ-Wright opened this issue 5 years ago • 13 comments

CJ-Wright avatar Sep 09 '19 17:09 CJ-Wright

Codecov Report

Merging #273 into master will decrease coverage by 9.28%. The diff coverage is n/a.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #273      +/-   ##
==========================================
- Coverage   95.42%   86.14%   -9.29%     
==========================================
  Files          13       22       +9     
  Lines        1661     3781    +2120     
==========================================
+ Hits         1585     3257    +1672     
- Misses         76      524     +448     
Impacted Files                                     Coverage Δ
streamz/dataframe/tests/test_cudf_dataframes.py    2.97% <ø> (ø)
streamz/dataframe/tests/test_dataframe_utils.py    100.00% <ø> (ø)
streamz/utils_test.py                              93.75% <0.00%> (-0.44%) ↓
streamz/sources.py                                 96.88% <0.00%> (-0.35%) ↓
streamz/dataframe/core.py                          91.60% <0.00%> (-0.01%) ↓
streamz/dask.py                                    100.00% <0.00%> (ø)
streamz/tests/test_kafka.py                        89.03% <0.00%> (ø)
streamz/dataframe/tests/test_dataframes.py         96.86% <0.00%> (ø)
streamz/tests/test_graph.py                        100.00% <0.00%> (ø)
streamz/tests/test_core.py                         97.13% <0.00%> (ø)
... and 5 more

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Powered by Codecov. Last update 9c8d3bb...8735ad8.

codecov[bot] avatar Sep 09 '19 18:09 codecov[bot]

@martindurant I think it would be good to have coverage of the tests. We can `# pragma: no cover` things that we won't cover.

CJ-Wright avatar Sep 09 '19 23:09 CJ-Wright
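For context, the `# pragma: no cover` comment mentioned above is how coverage.py excludes individual lines or branches from measurement. A minimal sketch (the function and its GPU branch are hypothetical, not streamz code):

```python
# Hypothetical helper illustrating `# pragma: no cover`: coverage.py
# will not count the GPU branch as a miss, even though CI never runs it.

def load_backend(use_gpu: bool = False) -> str:
    """Return the name of the active dataframe backend."""
    if use_gpu:  # pragma: no cover  - cuDF path, not exercised on CI
        import cudf  # noqa: F401
        return "cudf"
    return "pandas"
```

The pragma only affects reporting; the code still runs normally when the branch is taken.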

Is it normal to consider tests as part of coverage? We would want to know about tests that didn't run, but it doesn't seem like a useful metric for the package as a whole. Actually, since we have two separate numbers in the report, we probably don't need a limit on both.

I don't really mind, though - but we had better indeed mark some things we don't use. I don't know why the coverage appears to have gone down so significantly.

martindurant avatar Sep 10 '19 11:09 martindurant

I've been trying to exclude the test chunk from the library chunk. I think the coverage is low because we don't run many of the dataframe tests.

Coverage of the tests helps make certain that we are actually running them, and shows if we miss some chunks of the test suite for one reason or another. (I stole the target from matplotlib.)

CJ-Wright avatar Sep 10 '19 12:09 CJ-Wright
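One way to report the test chunk and the library chunk as separate numbers, as discussed above, is Codecov flags. A hedged sketch only - the flag names and paths are assumptions, not streamz's actual configuration:

```yaml
# Hypothetical codecov.yml: report library code and test code under
# separate flags, each with its own coverage number. Note that the
# "library" paths below also match the test directories; a real config
# would carve those out with ignore rules or more specific paths.
flags:
  library:
    paths:
      - streamz/
  tests:
    paths:
      - streamz/tests/
      - streamz/dataframe/tests/
```

With two flags, a coverage target (like the one borrowed from matplotlib) can be set on each independently.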

Whilst I agree,

  1. So why are we not running those DF tests?
  2. Why is the coverage diff negative?

martindurant avatar Sep 10 '19 14:09 martindurant

  1. Many of the tests are cudf tests, which we can't really run on Travis. We can leave them in, but we'll need to mark them as no cover.
  2. When the uncovered tests are averaged in with the covered code, it drops the percentage. (I'm still working on test exclusion.)

CJ-Wright avatar Sep 10 '19 14:09 CJ-Wright

OK, so you can blanket ignore cuDF tests. It would be nice if coverage had a mechanism like pytest marks to set this automatically.

martindurant avatar Sep 10 '19 14:09 martindurant
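The pytest side of blanket-ignoring cuDF tests is straightforward with a reusable `skipif` mark. A sketch under stated assumptions - the test function and the `HAS_CUDF` / `requires_cudf` names are hypothetical, though `cudf` is the real RAPIDS package name:

```python
# Hypothetical pattern: a reusable pytest mark that skips GPU-only tests
# on machines without cudf installed (e.g. Travis CI workers).
import importlib.util

import pytest

# True only when the cudf package is importable.
HAS_CUDF = importlib.util.find_spec("cudf") is not None

requires_cudf = pytest.mark.skipif(not HAS_CUDF, reason="cudf not installed")


@requires_cudf
def test_cudf_roundtrip():
    import cudf  # only imported when the mark lets the test run
    df = cudf.DataFrame({"x": [1, 2, 3]})
    assert df["x"].sum() == 6
```

Coverage itself has no equivalent of marks, as noted above, so the skipped branches in library code still need `# pragma: no cover` or an omit rule in the coverage config.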

Please fix the merge conflict, if you have the time.

martindurant avatar Jun 24 '20 14:06 martindurant

I guess we can close this now, since we have `# pragma: no cover` and also flaky markers for tests? Or are there places where we can improve?

chinmaychandak avatar Sep 01 '20 23:09 chinmaychandak

Do those cover the purpose of this PR? This PR was supposed to add an additional check on the coverage of the test code itself. That way we could be certain that the entire test suite was being run (in case we missed anything and pieces of the tests were actually not being run).

CJ-Wright avatar Sep 02 '20 00:09 CJ-Wright

Oh, I understand now. I misunderstood the purpose of this PR then. Apologies!

chinmaychandak avatar Sep 02 '20 00:09 chinmaychandak

No problem, I should really merge master into this.

CJ-Wright avatar Sep 02 '20 00:09 CJ-Wright

We no longer have any cuDF tests in streamz, though, so the coverage is back on track. And branches of execution which are not tested because of cuDF/cuStreamz-specific functionality have been marked with `# pragma: no cover`. Maybe that should be enough? How can we check whether we still need to cover tests?

chinmaychandak avatar Oct 27 '20 18:10 chinmaychandak