Coverage at 51%
besselnumba.py and invlapnumba.py seem to be the cause of the low coverage at the moment:
---------- coverage: platform linux, python 3.11.10-final-0 ----------
Name                         Stmts   Miss  Cover
------------------------------------------------
ttim/__init__.py                10      0   100%
ttim/aquifer.py                146     26    82%
ttim/aquifer_parameters.py      99      6    94%
ttim/besselnumba.py           1035    898    13%
ttim/circareasink.py            98      7    93%
ttim/element.py                153     64    58%
ttim/equation.py               189     71    62%
ttim/fit.py                    139     16    88%
ttim/invlapnumba.py            143    132     8%
ttim/linedoublet.py            153      9    94%
ttim/linesink.py               322    127    61%
ttim/model.py                  339    103    70%
ttim/trace.py                  180     83    54%
ttim/util.py                    71      8    89%
ttim/version.py                  1      0   100%
ttim/well.py                   150     21    86%
------------------------------------------------
TOTAL                         3228   1571    51%
There is a test_bessel.py file in ttim/src/, but it compares the Fortran-compiled Bessel functions to their Numba counterparts. Adapting this file to test the Numba functions against stored results would probably improve coverage significantly, and would actually test the besselnumba code. A rough sketch of such a test is given below.
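As an illustration only: everything named in this sketch (the function `besselk0near`, its arguments, and the .npz path) is a placeholder, not the actual besselnumba API; the real functions and signatures would need to be taken from ttim/src/test_bessel.py.

```python
# Sketch of a stored-reference regression test for besselnumba.
# `besselk0near`, its arguments, and the data path are placeholders;
# substitute the functions actually exercised by ttim/src/test_bessel.py.
import numpy as np

from ttim import besselnumba


def test_besselnumba_against_stored_reference():
    # Reference output computed once with the trusted Fortran routines
    # and committed to the repository.
    ref = np.load("tests/data/besselnumba_reference.npz")
    result = besselnumba.besselk0near(ref["z"], ref["Nt"])
    np.testing.assert_allclose(result, ref["expected"], rtol=1e-12)
```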
But all notebooks (i.e. all TTim models) call both the besselnumba and invlapnumba routines. Shouldn't running those count as part of the testing?
If these contain Numba functions, it could be that coverage is not recognizing them because they are compiled outside of the Python scope. I think a possible solution would be to add the NUMBA_DISABLE_JIT=1 environment variable, although it might slow things down a bit.
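For example, a minimal sketch assuming the suite runs under pytest: the variable has to be set before the jitted functions are compiled, and a top-level conftest.py achieves that because pytest imports it before any test module.

```python
# conftest.py -- disable Numba JIT compilation so that coverage.py can
# trace the pure-Python fallback of the @njit-decorated functions.
# Must execute before numba (and hence ttim) is imported; pytest imports
# conftest.py before the test modules, so placing it here is sufficient.
import os

os.environ["NUMBA_DISABLE_JIT"] = "1"
```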
Indeed.
For a variety of packages I test twice: once for coverage with NUMBA_DISABLE_JIT=1, and a second time to check that everything works correctly with Numba enabled.
E.g. this project is 90% Numba: https://github.com/Deltares/numba_celltree/blob/6092c456587c8f4ade1d3b19e87bc7278b3d379d/pyproject.toml#L85, but has 99% coverage.
Presumably the test cases are not too large, such that the slowdown from dynamic Python isn't too bad.
And otherwise, it could be worthwhile to split the suite into small unit tests run with coverage and bigger, slower integration tests run without coverage (see the sketch below).
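A rough sketch of such a split using a pytest marker; the marker name `integration` is an assumption and would need to be registered in the pytest configuration, as the numba_celltree pyproject linked above does:

```python
# Sketch: tag slow end-to-end tests so the coverage job can skip them.
import pytest


@pytest.mark.integration
def test_full_model_end_to_end():
    # Hypothetical slow test that builds and solves a complete TTim model
    # with the JIT enabled; excluded from the coverage run.
    ...


# Coverage job (JIT disabled):    pytest -m "not integration" --cov=ttim
# Integration job (JIT enabled):  pytest -m integration
```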
Hi, I have just modified the development branch to set NUMBA_DISABLE_JIT=1:
- tests take 7 min in total in the CI run
- coverage increases to 67%
What do you think, @dbrakenhoff?
That seems more reasonable in terms of coverage percentage, and not too bad in terms of total run time. In the end I think I'll adapt the Fortran-vs-Numba test code that I used to check the Numba implementation, and turn off the Numba JIT for those tests specifically. Until then, this seems like a good solution. Thanks for trying it out and reporting the results!
Fixed in #77.