
Improving test coverage

mejrs opened this issue · 8 comments

This issue gathers some areas where we need to improve our test coverage.

You can view pyo3's test coverage online here, or generate it locally by following the steps here (do this while writing tests, so you can check that they actually add coverage).
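For reference, one common way to generate such a report locally is cargo-llvm-cov; the steps in the linked guide may differ, so treat this as an assumption rather than the project's documented workflow:

```console
# Assumption: pyo3's own documented steps may differ from this.
cargo install cargo-llvm-cov
# Build, run the test suite under instrumentation, and open an HTML report:
cargo llvm-cov --open
```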

The following areas in particular (but not only these) need improvement:

Please write meaningful tests: don't just cover the happy path, but also write tests that exercise error paths and corner cases.
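To make the point concrete, a test covering both paths might look something like this (a minimal sketch, assuming the pyo3 0.16-era `Python::eval` and `extract` API):

```rust
use pyo3::prelude::*;

#[test]
fn extract_i64_happy_and_error_paths() {
    Python::with_gil(|py| {
        // Happy path: a Python int extracts to i64.
        let int_obj = py.eval("42", None, None).unwrap();
        assert_eq!(int_obj.extract::<i64>().unwrap(), 42);

        // Error path: a Python str must *fail* to extract to i64.
        // This is the branch that happy-path-only tests never reach.
        let str_obj = py.eval("'spam'", None, None).unwrap();
        assert!(str_obj.extract::<i64>().is_err());
    });
}
```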

mejrs · Jun 14 '22

See also #2191

davidhewitt · Jun 15 '22

I finally got coverage working locally and I'm going to chip away at some of these.

aganders3 · Jun 18 '22

What do you folks think about coverage for pyo3-ffi? On one hand it's probably useful to have coverage; on the other, the only code in there consists of copy-paste implementations of CPython's macros and #[derive]-d traits.

davidhewitt · Jul 03 '22

Doesn't that coverage include whether an external function is called or not? While I am sympathetic to the idea that covering FFI definitions does not gain much (and this is definitely a sore spot for rust-numpy as well), I would say there is still value in having runtime tests for the declared foreign API, especially since that FFI is not just enabling PyO3 itself but is also published as a separate crate.
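For instance, a runtime smoke test for a single declaration might look like this (a sketch only; it assumes a test binary where no interpreter is already running):

```rust
use pyo3_ffi::*;

#[test]
fn pylong_from_long_roundtrips() {
    unsafe {
        // Actually calling the declared function verifies linkage and the
        // declared signature at runtime, not just that the binding compiles.
        Py_Initialize();
        let obj = PyLong_FromLong(42);
        assert!(!obj.is_null());
        assert_eq!(PyLong_AsLong(obj), 42);
        Py_DECREF(obj);
        Py_Finalize();
    }
}
```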

adamreichold · Jul 03 '22

I think from what I've seen on codecov, our coverage stats don't say which extern functions are called (but I could be wrong).

davidhewitt · Jul 03 '22

Another thought: in the future it could be worth adding #[no_coverage] to #[test] functions. IMO whether the test itself is covered is not that interesting; really we want to measure coverage of the library code. We could potentially already do this with https://github.com/taiki-e/coverage-helper if folks here agree with this thought.
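Going by its README, the usage would be roughly the following (an assumption, not verified against this repo):

```rust
// coverage-helper's `test` attribute shadows the built-in `#[test]` and,
// on nightly with the appropriate cfg, marks the function as excluded
// from coverage, so test bodies stop counting toward line-coverage totals.
use coverage_helper::test;

#[test]
fn my_test() {
    assert_eq!(2 + 2, 4);
}
```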

davidhewitt · Jul 05 '22

That sounds reasonable to me, especially since the always-covered test lines "dilute" the actual coverage: with N lines of production code at zero percent coverage but 3N lines of test code, I could still end up with (0 + 3N) / (N + 3N) = 3N/4N = 75% coverage.

adamreichold · Jul 05 '22

My experience is more in Python, but most projects I've worked with exclude tests from coverage. I'm not sure if this is just convention or best practice, but I'd agree with it for the above reasons.

As for FFI, I'm also +1 on having tests. I don't think they'd be too much effort (I'm willing to help), they shouldn't need much maintenance, and they would provide some value. That seems like the best option versus leaving them uncovered or marking them #[no_coverage].

aganders3 · Jul 05 '22