Track package performance over time
covr and codecov.io are great for tracking code coverage during a package's development. Another aspect of a package that would be useful to track is the performance of one or more benchmark functions.
This is useful for package authors to ensure they don't inadvertently introduce a performance regression when adding new features. It is also useful for users to see how much a new version improves or reduces performance. The benchmarks could also be run when a PR is submitted, to see how the changes impact current performance.
I wrote a rough example at https://github.com/jimhester/benchthat and @krlmlr has dplyr-specific code to do this at https://krlmlr.github.io/dplyr.benchmark/.
Some useful features to me would be:
- Store the results in an easy-to-parse file in the repository (my draft puts them in /docs/benchmarks); see the sketch after this list.
- Helper functions that are easy to run automatically in a package's tests.
- Run a benchmark retroactively over the repo history.
- Is there a peak-finding algorithm / git bisect approach we could use to find performance breakpoints, so you don't have to exhaustively benchmark each commit? (A rough bisect sketch also follows this list.)
- Visualizing and reporting on benchmark results.
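To make the first two bullets concrete, here is a minimal sketch of what such a helper could look like: time an expression a few times, record the median elapsed time together with the current commit, and append the result to a CSV under docs/benchmarks. The name `record_benchmark()` and the file layout are made up for illustration; this is not part of benchthat or any existing package.

```r
# Hypothetical helper: time `expr`, then append the result to a per-benchmark
# CSV in the repository so the history stays easy to parse later.
record_benchmark <- function(name, expr, times = 10,
                             dir = "docs/benchmarks") {
  code <- substitute(expr)   # capture unevaluated, so each run re-times it
  env  <- parent.frame()
  timings <- vapply(seq_len(times), function(i) {
    system.time(eval(code, env))[["elapsed"]]
  }, numeric(1))

  result <- data.frame(
    benchmark = name,
    commit    = system("git rev-parse HEAD", intern = TRUE),
    date      = format(Sys.time(), "%Y-%m-%dT%H:%M:%S"),
    median_s  = median(timings)
  )

  dir.create(dir, recursive = TRUE, showWarnings = FALSE)
  out <- file.path(dir, paste0(name, ".csv"))
  new_file <- !file.exists(out)
  write.table(result, out, append = !new_file, sep = ",",
              row.names = FALSE, col.names = new_file)
  invisible(result)
}

# Could be called from a test file or a CI step, e.g.
# record_benchmark("sort_1e6", sort(runif(1e6)))
```

Storing one small CSV per benchmark keeps the history diff-able in the repository and trivial to read back for plotting or reporting.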
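For the bisect question, a plain binary search over the commit list is probably enough when the benchmark is known to be fast at one end of the history and slow at the other. This is only a sketch of the idea: `bench_at()` glosses over the hard parts (reinstalling the package at each commit and running the benchmark in a fresh R session), and all names here are hypothetical.

```r
# Hypothetical bisect over commit history: find the first commit whose
# benchmark time exceeds `threshold` seconds, assuming the regression
# happened once and persisted.
bench_at <- function(sha) {
  system(paste("git checkout --quiet", sha))
  # ... reinstall the package here before timing, ideally in a fresh session ...
  system.time(sort(runif(1e6)))[["elapsed"]]  # stand-in for a real benchmark
}

find_breakpoint <- function(shas, threshold) {
  lo <- 1              # oldest commit, assumed fast
  hi <- length(shas)   # newest commit, assumed slow
  while (hi - lo > 1) {
    mid <- (lo + hi) %/% 2
    if (bench_at(shas[mid]) > threshold) hi <- mid else lo <- mid
  }
  shas[hi]             # first commit at or above the threshold
}

# shas <- system("git rev-list --reverse HEAD", intern = TRUE)
# find_breakpoint(shas, threshold = 0.5)
```

That needs only about log2(n) benchmark runs instead of one per commit, at the cost of assuming a single, persistent slowdown.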
There's also https://github.com/analyticalmonk/Rperform
Rperform seems like it already does most of this, but it clearly needs more exposure and use, and possibly some thought about better integration with pkgdown / Travis so it is more useful for PR results.
I love this idea!
Question re: outside support:
codecov.io is to code coverage as ??? is to benchmarking
Or does this aspect have to be handled by the package described here? The display of results over time could potentially be handled in a pkgdown site.
I don't know of anything like codecov.io for tracking benchmarking over time. If there were something, we could use it, or maybe set up a simple service to do so.
> codecov.io is to code coverage as ??? is to benchmarking
https://github.com/tobami/codespeed is the only one I know of. You need to run your own service.
Julia used to run it; I don't know if they still do.
The Julia site used to be at http://speed.julialang.org/, but it is gone now.
Seems to me that it would be logical to integrate performance testing with testthat.
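Just to illustrate what that could look like (this is a hypothetical helper, not an existing testthat expectation): a custom expectation that fails when an expression exceeds a time budget, so performance checks can live alongside the regular tests.

```r
library(testthat)

# Hypothetical expectation: fail if `object` takes longer than `seconds`.
expect_faster_than <- function(object, seconds) {
  code    <- substitute(object)
  elapsed <- system.time(eval(code, parent.frame()))[["elapsed"]]
  expect(
    elapsed <= seconds,
    sprintf("%s took %.3fs, budget was %.3fs",
            paste(deparse(code), collapse = " "), elapsed, seconds)
  )
  invisible(elapsed)
}

# test_that("sorting stays fast", {
#   expect_faster_than(sort(runif(1e5)), seconds = 0.5)
# })
```

Absolute time budgets are noisy across machines, so in practice such an expectation would probably compare against the stored benchmark history rather than a fixed number.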