
Add code to generate expected test results

Open lea-hagen opened this issue 4 years ago • 5 comments

@karllark pointed out in #607 that we should have some relatively automated way to generate new test reference files/outputs when we update core code. For instance, when the noise model calculations get updated (#605), the change trickles down to several other steps, whose reference outputs then need to be manually regenerated.

I'm not sure if there's a standard way to do this. Perhaps we could add another piece of code, similar to the tests themselves, that can be called as needed to consistently create the reference files, tables, etc., with some agreed-upon settings.
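
Something like the sketch below, maybe? All of the names, paths, and the seed value here are hypothetical and just illustrate the shape of it: one script that both the tests and maintainers can run, with all settings pinned in one place.

```python
import numpy as np
from astropy.table import Table

SEED = 1234  # hypothetical agreed-upon seed, shared with the tests


def make_reference_table(filename):
    """Write one reference table with reproducible content."""
    rng = np.random.default_rng(SEED)
    tab = Table({"flux": rng.normal(size=10)})
    tab.write(filename, overwrite=True)


if __name__ == "__main__":
    # Each pipeline step (physics model, noise model, fitting, ...) would
    # get a call like this that writes its expected output file.
    make_reference_table("expected_fluxes.fits")
```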

lea-hagen avatar Jul 28 '20 16:07 lea-hagen

This is also relevant to discussion in #598.

lea-hagen avatar Jul 28 '20 16:07 lea-hagen

How about we add the needed code to the appropriate beast-examples subdirectory? I've just created the metal_small example and am setting up the tests to be able to switch between the two (basically subdirs in the web location of the cached files). We will need files for both phat_small and metal_small (nominally), or a way to say don't run a test if it only applies to one or the other. Maybe have the test be skipped if the files don't exist?
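
For the skipping, pytest's `skipif` marker could handle it. A minimal sketch, assuming the cached files land in a known local directory (the path here is made up):

```python
import os

import pytest

# Hypothetical location of the cached metal_small reference files.
METAL_SMALL_DIR = "beast_cache/metal_small"


@pytest.mark.skipif(
    not os.path.isdir(METAL_SMALL_DIR),
    reason="metal_small reference files not downloaded",
)
def test_against_metal_small_reference():
    # ... load the cached files and compare against freshly computed outputs ...
    pass
```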

karllark avatar Sep 03 '20 16:09 karllark

That's an interesting idea. Certainly having examples of how to generate all of the plots, etc., would be useful.

lea-hagen avatar Sep 08 '20 19:09 lea-hagen

It also just occurred to me that we may want to choose a single fixed random seed to use everywhere. That way we don't have to keep track of which seeds go with which tests.
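
A minimal sketch of what that could look like (the constant name and value are made up, not anything in the BEAST today):

```python
import numpy as np

# Hypothetical project-wide seed; the actual value would be agreed upon once.
BEAST_TEST_SEED = 1234


def get_test_rng():
    """Return a Generator seeded identically for every test and for the
    reference-file generation script, so their outputs match."""
    return np.random.default_rng(BEAST_TEST_SEED)
```

Then both the tests and the regeneration code would call `get_test_rng()` instead of seeding locally.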

lea-hagen avatar Sep 08 '20 19:09 lea-hagen

Yep.

karllark avatar Sep 08 '20 19:09 karllark