[WIP] Setup benchmark suite
xref: https://github.com/Point72/csp/issues/38
Following @trallard's suggestion, I'm exploring asv to build a benchmark suite, starting from @AdamGlustein's code in https://github.com/Point72/csp/pull/322#discussion_r1676210203.
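For reference, asv is driven by an `asv.conf.json` at the repo root. A minimal sketch of what that might look like for this repo (field values here are illustrative assumptions, not what's committed in this PR):

```json
{
    "version": 1,
    "project": "csp",
    "project_url": "https://github.com/Point72/csp",
    "repo": ".",
    "branches": ["main"],
    "environment_type": "virtualenv",
    "benchmark_dir": "csp/benchmarks",
    "env_dir": ".asv/env",
    "results_dir": ".asv/results",
    "html_dir": ".asv/html"
}
```

With something like this in place, `asv run` benchmarks a range of commits and `asv publish` / `asv preview` render the results as HTML.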
Example output, running on just my machine through a few commits:
I had missed the original discussion in #322, which explains where a lot of this came from. It would be good to tidy this up in any case. There are a whole bunch of benchmark tests that I want to write after...
That benchmark script was just something I hacked together; I didn't intend for it to be a canonical example. If we are making standard benchmarks, then we should use a more general framework.
I did a first-pass approximation of a more general framework and extended it to the quantile function to demonstrate: https://github.com/Point72/csp/blob/fb36b1b6f448d8c39aaab4edfb4b0ba7eed82ed7/csp/benchmarks/stats/basic.py
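To illustrate the shape of an asv benchmark, here is a minimal sketch of the pattern the framework expects: a plain class with `setup`, `params`/`param_names`, and `time_*` methods that asv discovers and times automatically. The class name, parameters, and the NumPy quantile stand-in below are my own illustrative assumptions, not the actual contents of `csp/benchmarks/stats/basic.py`:

```python
import numpy as np


class QuantileSuite:
    """Hypothetical asv benchmark: asv runs setup() before timing
    each time_* method, once per entry in params."""

    # asv parameterizes each benchmark over these input sizes
    params = [1_000, 100_000]
    param_names = ["n"]

    def setup(self, n):
        # Fixed seed so every commit is benchmarked on identical data
        rng = np.random.default_rng(42)
        self.data = rng.standard_normal(n)

    def time_quantile(self, n):
        # asv reports the wall-clock time of this method body
        np.quantile(self.data, [0.25, 0.5, 0.75])
```

A real csp benchmark would build and run a csp graph in `time_*` instead of the NumPy call, but the class structure asv discovers is the same.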
@arhamchopra it would be good to see if we can use our benchmark infrastructure to run csp benchmarks as well, and maybe fix some of the ugliness I introduced in this PR if we can.