ml4floods
Some testing ideas / principles
Ref: PR 53: Pytest to environment.yml
The first thing to say is that you probably don't need to write much more code: the notebooks already contain the test principles, and tests are just an automated way to run them before you push changes to master, etc.
My main motivation for writing tests is that they allow me to catch silent errors, quickly identify what has gone wrong, and jump into arbitrary points in the pipeline by putting in an `assert False` statement and using `pdb` (which might not be best practice, but I find it fast), e.g.

`pytest --pdb .`
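For example, a throwaway test like the sketch below (the test name and toy batch are invented for illustration, not taken from the real pipeline) lets `pytest --pdb` drop you straight into the debugger at whatever point you want to poke at:

```python
import numpy as np

def test_inspect_pipeline():
    # Stand-in for a real batch from the pipeline; shapes are illustrative.
    batch = np.random.rand(4, 3, 256, 256)
    # ... run the pipeline steps you want to inspect here ...
    assert False  # with `pytest --pdb`, execution stops here inside pdb
```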
Key things to test:
- Shapes of input and output datasets match expectations (see the sketch after this list)
- Missing values (they need to be found and dealt with)
- Value ranges are within expected bounds (e.g. probabilities in [0, 1])
- Toy datasets that are very small (and so let the tests run quickly) are really important for passing through the pipeline.
- In the past I have written assertions that the model is "learning", i.e. losses fall after 1 or 2 epochs (though I'm not sure this is best practice, because with SGD there is a random chance this won't happen: you expect it, but it isn't guaranteed). A seeded sketch follows this list.
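As a minimal sketch of the first four points, a pytest on a tiny toy batch might look like this (all names, shapes, and the stand-in "model output" are invented for illustration, not taken from ml4floods):

```python
import numpy as np

def test_toy_batch_sanity():
    # Tiny toy "image" batch so the test runs in milliseconds.
    batch = np.random.rand(2, 3, 8, 8).astype("float32")
    probs = 1.0 / (1.0 + np.exp(-batch))  # stand-in for model output

    # Shapes of inputs and outputs match expectations
    assert batch.shape == (2, 3, 8, 8)
    assert probs.shape == batch.shape

    # No missing values slipped through
    assert not np.isnan(batch).any()

    # Value ranges are within expected bounds (probabilities in [0, 1])
    assert probs.min() >= 0.0 and probs.max() <= 1.0
```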
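And a hedged version of the "model is learning" assertion, with the seed fixed so the SGD caveat above doesn't make the test flaky (this is a self-contained PyTorch toy, not the actual ml4floods model):

```python
import torch

def test_model_learns_on_toy_data():
    torch.manual_seed(0)  # deterministic seed so the "loss falls" check is stable
    x = torch.randn(32, 4)
    y = (x.sum(dim=1, keepdim=True) > 0).float()  # trivially learnable target
    model = torch.nn.Sequential(torch.nn.Linear(4, 1), torch.nn.Sigmoid())
    opt = torch.optim.SGD(model.parameters(), lr=0.5)
    loss_fn = torch.nn.BCELoss()

    def epoch_loss() -> float:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
        return loss.item()

    first = epoch_loss()
    for _ in range(20):
        last = epoch_loss()
    assert last < first  # loss should fall on such an easy problem
```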
Things to remember:
- Deterministically seed the random number generator (a sketch follows)
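A small helper like the one below can be called at the top of each test (the name `seed_everything` is just an example here, not an existing ml4floods function):

```python
import random

import numpy as np
import torch

def seed_everything(seed: int = 42) -> None:
    # Seed all the RNGs a typical ML pipeline touches
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
```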
Nice to haves:
- I think typing is great as a documentation tool: it lets a user know whether a function is going to return a `Tensor`, a `List`, a `Dict`, etc. Sometimes getting `mypy` to play nicely and not give any errors is a bit of a faff, so I would say it is more of a general principle than a hard requirement that all `mypy` checks pass. Ultimately, however, that is what you want.
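As an illustration of typing-as-documentation, a hypothetical function like the one below tells the reader what goes in and what comes out without them having to read the body:

```python
from typing import Dict, List

import torch

def mean_per_band(batch: Dict[str, torch.Tensor], bands: List[str]) -> Dict[str, float]:
    # The annotations document at a glance that this takes a dict of named
    # Tensors and returns plain floats keyed by band name.
    return {b: batch[b].float().mean().item() for b in bands}
```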
EDIT: removed the link to the hypothesis library, because by hiding the inputs (they're generated by the library) it makes it harder to use the tests as a point for developer understanding of what's going on; it's easier to just generate your own test examples.
(I did only just find the hypothesis library, so it may be complete overkill anyway.)