Test for catastrophic cancellation
A number of the solutions contain data which results in catastrophic cancellation: subtracting two very large numbers whose exponents are identical and whose mantissas agree in all but the last several bits, so the result is a very small number with only a couple of bits of precision. The most recent example occurs in solutions/carpooling, where in 2018:

    helpertables.soln_ref_funits_adopted = 3515221717452.7983
    helpertables.soln_pds_funits_adopted = 3515221717452.8
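To see how little precision survives, here is a minimal standalone sketch (not code from the repository, just the two values above) that performs the subtraction and estimates how many mantissa bits cancel:

```python
# The two carpooling adoption values above differ only in the last few bits of
# their mantissas, so subtracting them cancels nearly all of the precision.
import math

ref = 3515221717452.7983   # helpertables.soln_ref_funits_adopted
pds = 3515221717452.8      # helpertables.soln_pds_funits_adopted

diff = pds - ref
print(diff)   # 0.00146484375 -- most of those digits are rounding noise

# Roughly how many of float64's 53 mantissa bits were cancelled:
bits_cancelled = math.log2(abs(pds) / abs(diff))
print(f"~{bits_cancelled:.0f} of 53 bits cancelled, ~{53 - bits_cancelled:.0f} remain")
```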
In Excel, the researcher is unlikely to notice these occurrences. When converting to Python we generally notice them because tests/test_excel_integration.py will fail, as pytest.approx() requires a relative difference of less than 10^-6. In the carpooling example above, Python produces 0.00146484375 for unitadoption.soln_net_annual_funits_adopted in 2018, while the value in Excel is -0.01416015625. We handle this in tests/test_excel_integration.py by masking these values out of the comparison.
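The actual masking code is not reproduced here; a hypothetical sketch of the idea (compare_with_mask is an invented helper name, not what tests/test_excel_integration.py really contains) looks roughly like this:

```python
# Hypothetical sketch of masking known cancellation cells out of the
# Excel-vs-Python comparison.
import numpy as np
import pytest

def compare_with_mask(expected, actual, mask=None):
    """Compare two tables cell by cell, skipping any cell flagged in mask."""
    expected = np.asarray(expected, dtype=float)
    actual = np.asarray(actual, dtype=float)
    mask = np.zeros(expected.shape, dtype=bool) if mask is None else np.asarray(mask, dtype=bool)
    for idx in np.ndindex(expected.shape):
        if mask[idx]:
            continue  # known catastrophic cancellation: values legitimately diverge
        assert actual[idx] == pytest.approx(expected[idx], rel=1e-6)

# The carpooling 2018 cell from above: with the mask, the comparison passes.
excel  = [[-0.01416015625, 100.0]]
python = [[ 0.00146484375, 100.0]]
mask   = [[True, False]]
compare_with_mask(excel, python, mask)
```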
It would be great to know how often and where this happens: once we can retire the Excel models, researchers should be made aware that the model introduced a large error bar at a given step of the calculation, so they can re-evaluate their choices of data.
Unfortunately I've no idea how to do this in an automated way. We could tolerate even a large slowdown if the check ran as a test case in continuous integration or as a nightly job.
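One possible direction, sketched under the assumption that we could instrument or spot-check the subtractions the model performs (check_cancellation and its threshold are hypothetical, not anything in the repository today):

```python
# A possible building block for automated detection: warn whenever a
# subtraction cancels so many mantissa bits that only a handful remain.
import math
import warnings

def check_cancellation(a: float, b: float, label: str = "", threshold_bits: int = 40):
    """Warn if subtracting b from a cancels more than threshold_bits of precision."""
    diff = a - b
    if diff == 0.0 or a == 0.0:
        return diff
    bits_cancelled = math.log2(abs(a) / abs(diff))
    if bits_cancelled > threshold_bits:
        warnings.warn(
            f"{label}: ~{bits_cancelled:.0f} of 53 mantissa bits cancelled "
            f"({a!r} - {b!r} = {diff!r})"
        )
    return diff

# The carpooling example above trips the warning: roughly 51 bits cancel.
check_cancellation(3515221717452.8, 3515221717452.7983,
                   label="soln_net_annual_funits_adopted[2018]")
```

If something like this ran in a slow instrumented mode over every solution, the collected warnings (per solution, per year) could also answer the "how often and where" question above.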