beast
Add code to generate expected test results
@karllark pointed out in #607 that we should have some relatively automated way to generate new test reference files/outputs when we update core code. For instance, when the noise model calculations get updated (#605), that change trickles down to several other steps, whose reference files then all need to be manually regenerated.
I'm not sure if there's a standard way to do this. Perhaps we could add another piece of code, similar to the tests themselves, that can be called as-needed to consistently create the reference files, tables, etc, with some agreed-upon settings.
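One possible shape for this is a single regeneration entry point that lives alongside the tests. This is only a sketch: the helper names, settings, and paths below are hypothetical placeholders, not actual beast API calls, but the structure (one function, one dict of agreed-upon settings) is the idea being proposed.

```python
import os

# Agreed-upon settings used for ALL reference files, so every regeneration
# run is consistent. The values and keys here are illustrative only.
REFERENCE_SETTINGS = {
    "seed": 1234,      # fixed seed so regenerated outputs are reproducible
    "n_models": 100,   # keep the grid small so test data stays lightweight
}

def regenerate_references(out_dir="tests/data"):
    """Rebuild the cached reference files from the current core code.

    In a real implementation this would call the same pipeline steps the
    tests exercise, e.g. (hypothetical names):
        grid = make_model_grid(**REFERENCE_SETTINGS)
        write_reference(grid, os.path.join(out_dir, "grid.hd5"))
    """
    os.makedirs(out_dir, exist_ok=True)
    return out_dir
```

Tests would then compare against whatever this script last produced, and updating reference files after a core change (like #605) becomes a single command instead of a manual per-file chore.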
This is also relevant to discussion in #598.
How about we add the needed code to the appropriate beast-examples subdirectory? I've just created the metal_small example and am setting up the tests to switch between the two (basically subdirectories in the web location of the cached files). Nominally we will need files for both phat_small and metal_small, or a way to mark a test as applying to only one of them. Maybe the test could be skipped if its files don't exist?
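The skip-if-missing idea could be a small pytest helper shared by both example suites. A minimal sketch, assuming pytest; the file paths and test name are hypothetical:

```python
import os
import pytest

def require_files(*paths):
    """Skip the decorated test unless every listed reference file exists.

    This lets a test that only has phat_small (or only metal_small) cached
    data downloaded be skipped cleanly instead of failing.
    """
    missing = [p for p in paths if not os.path.exists(p)]
    return pytest.mark.skipif(
        bool(missing), reason=f"missing reference files: {missing}"
    )

# Hypothetical usage: path is illustrative, not an actual beast file.
@require_files("tests/data/phat_small/obs.fits")
def test_noise_model():
    ...
```

`pytest.mark.skipif` reports the skip (with the reason string) in the test summary, so it stays visible which example's files were absent rather than silently passing.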
That's an interesting idea. Certainly having examples of how to generate all of the plots, etc., would be useful.
It also just occurred to me that we may want to choose a random seed to use everywhere. That way we don't have to keep track of which seeds go with which tests.
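A single shared seed could live in one place that both the tests and the regeneration code import. A sketch, assuming NumPy; the constant name and value are arbitrary placeholders:

```python
import numpy as np

# One project-wide seed, imported by every test and by the reference-file
# regeneration script, so there is nothing per-test to keep track of.
BEAST_TEST_SEED = 1234  # hypothetical name; the exact value doesn't matter

def get_rng():
    """Return a fresh NumPy Generator seeded with the shared seed.

    Each caller gets its own independent Generator, so test ordering
    cannot perturb the random streams.
    """
    return np.random.default_rng(BEAST_TEST_SEED)
```

Handing out a fresh `Generator` per call (rather than seeding the global `np.random` state once) keeps tests order-independent: every call reproduces the same stream from the start.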
Yep.