fortpy
Automate Unit-test-driven Development
Although much of the process is already automated, the developer still has to generate model input files, at the very least, in order to get the tests set up. For most scientific applications, the most fundamental inputs will be well-constrained (e.g. a grid in real space, positive integers, etc.). Imagine the following:
- run `testgen.py` on a brand new module that has its very first, fundamental subroutine.
- the script presents the subroutines in an intuitive order to the user and, for each one, asks what data to use for the input parameters.
- the input data can come from 1) files that already exist; 2) randomly/predictively generated data within specified physical constraints; 3) output of a previous unit test in the current module or another one in the library.
- the script asks which of the output variables to check (a sketch of this interactive flow follows the list).
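As a rough illustration of what that interactive session could look like, here is a minimal Python sketch. The `ParameterSpec`, `TestPlan`, and `plan_test` names are hypothetical and are not part of fortpy; they only show how the choices above could be collected.

```python
# Hypothetical sketch of the interactive prompt flow described above; the
# class and function names are illustrative, not part of fortpy's API.
from dataclasses import dataclass, field

@dataclass
class ParameterSpec:
    """Records where the input data for one parameter should come from."""
    name: str
    source: str          # "file", "generated", or "previous-test"
    detail: str = ""     # path, physical constraints, or test identifier

@dataclass
class TestPlan:
    subroutine: str
    inputs: list = field(default_factory=list)
    checked_outputs: list = field(default_factory=list)

def plan_test(subroutine, parameters, outputs):
    """Walk the user through choosing input sources and outputs to check."""
    plan = TestPlan(subroutine)
    for param in parameters:
        choice = input(f"[{subroutine}] source for '{param}' "
                       "(1=existing file, 2=generate, 3=previous test): ")
        source = {"1": "file", "2": "generated", "3": "previous-test"}[choice.strip()]
        detail = input(f"  detail for '{param}' (path/constraints/test id): ")
        plan.inputs.append(ParameterSpec(param, source, detail))
    for out in outputs:
        if input(f"check output '{out}'? [y/N]: ").strip().lower() == "y":
            plan.checked_outputs.append(out)
    return plan
```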
Once all the parameters have been specified, the script generates the XML file that automates the unit test, runs it, and then presents the user with the output to confirm whether it is correct. If it is, the output is saved as "model" output for that parameter and becomes available as input to other unit tests. The interface would keep duplication to a minimum as tests are chained together.
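Continuing the sketch above, the collected plan could then be serialized to an XML test specification. The element and attribute names below are illustrative assumptions, not fortpy's actual schema; the point is only that each input source and each checked output (with its saved model output) ends up recorded in the file.

```python
# Illustrative only: the element and attribute names are assumptions, not
# fortpy's actual XML test specification schema.
import xml.etree.ElementTree as ET

def write_test_xml(plan, model_output_path, path_template="{}_test.xml"):
    """Serialize a TestPlan (from the previous sketch) to a unit-test XML file."""
    root = ET.Element("test", attrib={"subroutine": plan.subroutine})
    for spec in plan.inputs:
        ET.SubElement(root, "input",
                      attrib={"name": spec.name, "source": spec.source,
                              "detail": spec.detail})
    for out in plan.checked_outputs:
        # Checked outputs are compared against the saved "model" output, which
        # can later be referenced as the input of another test in the chain.
        ET.SubElement(root, "output",
                      attrib={"name": out, "model": model_output_path})
    tree = ET.ElementTree(root)
    ET.indent(tree)  # pretty-print; Python 3.9+
    tree.write(path_template.format(plan.subroutine),
               encoding="utf-8", xml_declaration=True)
```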
The only difficulty is that, for large modules with many subroutines, we need to decide which executables to present first so that their output can be used as input for the others in the module. A graph-theory algorithm could choose the executables with the fewest connections to present first.
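One way to get such an ordering is a topological sort of the module's data-dependency graph, breaking ties in favour of subroutines with the fewest connections. The sketch below assumes the dependency map is already available from an analysis of the module; the `presentation_order` name and the example subroutine names are hypothetical.

```python
# Sketch of one possible ordering strategy. `dependencies` maps each
# subroutine to the set of subroutines whose output it consumes; building
# that map from the module is a separate (assumed) analysis step.
from graphlib import TopologicalSorter  # Python 3.9+

def presentation_order(dependencies):
    """Order subroutines so that data producers are presented before consumers.

    Subroutines that become available at the same time are ordered by their
    number of connections, fewest first.
    """
    ts = TopologicalSorter(dependencies)
    ts.prepare()
    order = []
    while ts.is_active():
        ready = sorted(ts.get_ready(),
                       key=lambda s: len(dependencies.get(s, ())))
        order.extend(ready)
        ts.done(*ready)
    return order

# Example: build_grid needs no other routine's output, so it is presented
# first and its model output becomes available to the later tests.
print(presentation_order({
    "build_grid": set(),
    "assemble_matrix": {"build_grid"},
    "solve": {"build_grid", "assemble_matrix"},
}))
```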