
Proposed structure for integration of benchmarks with pytest-benchmark

johannahaffner opened this issue 8 months ago

This PR is a proposal for what integration of benchmarks could look like.

I suggest:

  • keeping a separate set of benchmark tests
  • running these with the usual, familiar testing machinery
  • keeping them out of the normal CI, at least once the benchmark collection grows large enough to make inclusion impractical
  • including extra information, such as the number of steps, the result of a nonlinear solve, and so on, which is useful for comparison to other implementations (see the sketch after this list)
  • saving benchmarking results to a JSON file, which enables comparison across versions (check if we still get the same accuracy, if we now need more steps, and so on)
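
To make this concrete, here is a minimal sketch of what one such benchmark test might look like, using pytest-benchmark's `benchmark` fixture and its `extra_info` dictionary. The Rosenbrock problem, the choice of solver, and the tolerances are just placeholders for illustration, not part of the proposal itself.

```python
import jax
import jax.numpy as jnp
import optimistix as optx


def rosenbrock(y, args):
    # Classic Rosenbrock function; `args` is unused but kept to match the
    # fn(y, args) calling convention used by optimistix.
    return jnp.sum(100.0 * (y[1:] - y[:-1] ** 2) ** 2 + (1.0 - y[:-1]) ** 2)


def test_bfgs_rosenbrock(benchmark):
    solver = optx.BFGS(rtol=1e-8, atol=1e-8)
    y0 = jnp.zeros(2)

    def solve():
        sol = optx.minimise(rosenbrock, solver, y0)
        # Block so that we time the actual solve, not asynchronous dispatch.
        return jax.block_until_ready(sol)

    sol = benchmark(solve)

    # Extra information recorded alongside the timings; it is included in the
    # JSON report when running with `--benchmark-json`.
    benchmark.extra_info["num_steps"] = int(sol.stats["num_steps"])
    benchmark.extra_info["result"] = str(sol.result)
    assert sol.result == optx.RESULTS.successful
```

Running e.g. `pytest benchmarks/ --benchmark-json=results.json` would then write both the timings and the extra information to a JSON file that can be compared across versions.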

I have not included any compile time benchmarking yet, but I think this setup can happily live alongside differently flavoured pytest-codspeed benchmarks, such as those discussed in https://github.com/patrick-kidger/equinox/issues/1001

I'm parsing a collection of CUTEST problems for now; once all the FORTRAN-specific boilerplate and odd formatting is stripped out, these don't take up much space and can either live here, or perhaps elsewhere if that is more practical.

johannahaffner, Apr 16 '25 09:04