bambi
Test model estimates against ground truth
A pretty big hole in our current testing approach is that all of our tests only check that models compile, run, and return the right variables. We should pick out a limited subset of models and test that the model estimates are close to the ground truth (where the "ground truth" comes from either analytical solutions or from other packages that converge with one another). This will also be helpful as we add new backends and/or experiment with different parameterizations of the same models (cf. #62).
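A minimal sketch of what such a test could look like. To keep it self-contained, the "fit" below is the closed-form OLS solution for a simple linear regression against simulated data with known parameters; in the actual test suite, the estimates would instead be posterior means from a fitted bambi model (something like `bmb.Model("y ~ x", data).fit()` — that call is an assumption about the API, not part of this sketch). The key pattern is: simulate data with known parameters, fit, and assert closeness within a tolerance rather than exact equality, since sampler estimates are noisy.

```python
import random

def simulate(n=5000, intercept=1.0, slope=2.0, sigma=0.5, seed=42):
    # Simulate data from a simple linear model with known ("ground
    # truth") parameters.
    rng = random.Random(seed)
    x = [rng.gauss(0, 1) for _ in range(n)]
    y = [intercept + slope * xi + rng.gauss(0, sigma) for xi in x]
    return x, y

def ols(x, y):
    # Closed-form least-squares estimates; stands in for posterior
    # means from a fitted model.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return my - slope * mx, slope

def test_estimates_close_to_truth():
    x, y = simulate()
    intercept, slope = ols(x, y)
    # Estimates are noisy, so compare within a tolerance rather than
    # for exact equality. The tolerance should be loose enough to
    # avoid flaky failures but tight enough to catch real bugs.
    assert abs(intercept - 1.0) < 0.05
    assert abs(slope - 2.0) < 0.05

test_estimates_close_to_truth()
```

With MCMC-based fits the tolerance would need to account for both sampling noise and Monte Carlo error, so fixing the random seed and using generous (but still meaningful) tolerances is probably the pragmatic choice.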