Add a way to programmatically get benchmark results
It would be nice if a benchmarking run could return the percentiles and other statistics about the benchmark. This could be useful for multiple purposes; here are some examples:
- Programs doing some self-diagnosis on startup to calibrate performance settings (think of games finding optimal settings for your hardware automatically).
- Writing automatic CI performance-monitoring services that test performance across commits and report to some system.
- Library users who would like to draw their own plots or save the results in a format other than the ones built into criterion.rs.
This might be related to #15; I'm not sure what that issue wants to accomplish exactly.
#15 is about letting the user run a custom analysis pass, and for that we need to expose the analysis methods (outliers, bootstrap, etc.) as functions.
I think this issue should be prioritized over #15.
It would be great if this could be done; happy to help. I am currently resorting to parsing the stdout, which is not ideal.
My use case is that I want the analyzed data shipped to a database so I can run easy comparisons over time.
I would really appreciate this feature. I am currently using Criterion for automated performance regression testing, but parsing the output is not a reliable way to do it.
Hi, this is a very appealing feature! I wonder when we can use it?
I just wanted to hop in to ask for this as well. Currently, I'm using Criterion to compare several algorithms for a particular workload, and being able to more easily collect, correlate, and plot the results of multiple benchmarks would be extremely useful.
As it is, I have to either parse the cargo bench output, scrape the HTML reports, or use the unstable machine-readable output - none of which are attractive options. Ideally I'd be able to grab the output of the benchmarks by, say, passing a custom function to criterion_main!().
Is this something the project is interested in supporting? I'd be happy to write up a pull request if so.
It would be most welcome. Significant architectural changes are probably needed, though.
Not sure how helpful this would be, but one workaround would be to do something like: https://gist.github.com/theJasonFan/65fa3e514a7fe7b179412f41f7c7168b
And invoke bench_function(...) in a binary.
It's quite hacky, though. Note that Criterion expects to know all group and benchmark names before any benchmark is run in order to generate the plots that compare performance, so running benchmarks this way and generating benchmark names programmatically from command-line input might break things.
I think one could direct the output to a different directory with: https://github.com/bheisler/criterion.rs/blob/b61121bab6305432e635c68bff3444665f0f22b6/src/lib.rs#L700.
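For reference, here is roughly what that workaround could look like as a standalone binary. This is only a sketch assuming the public Criterion builder API (`Criterion::default()`, `configure_from_args`, `bench_function`, `final_summary`); the `fibonacci` function and the "fib 20" name are placeholders, and the commented-out `output_directory` call assumes the builder method linked above is available in your Criterion version.

```rust
// A minimal sketch of the workaround described above: drive Criterion from an
// ordinary binary's main() instead of the criterion_main! harness.
use criterion::{black_box, Criterion};

// Placeholder workload; substitute whatever you actually want to measure.
fn fibonacci(n: u64) -> u64 {
    match n {
        0 | 1 => 1,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn main() {
    let mut c = Criterion::default().configure_from_args();

    // If your Criterion version exposes the builder linked above, the report
    // directory can be redirected, e.g.:
    // let mut c = c.output_directory(std::path::Path::new("my-bench-output"));

    // Each call runs one benchmark and writes the usual reports to disk;
    // the measured statistics still have to be read back from those files.
    c.bench_function("fib 20", |b| b.iter(|| fibonacci(black_box(20))));

    c.final_summary();
}
```

This still only gets you programmatic *execution*, not programmatic *results*: the analysis output ends up on disk in Criterion's own formats, which is exactly the gap this issue asks to close.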