
Improving and extending benchmarks

Open · bytesnake opened this issue 4 years ago • 0 comments

One area where we are lacking right now is benchmarking coverage. I would like to improve that in the coming weeks.

Infrastructure for benchmarking

Benchmarks are an essential part of linfa. They should give contributors feedback on their implementations and give users confidence that we're doing good work. In order to automate the process we have to employ a CI system which creates a benchmark report on (a) PRs and (b) commits to the master branch. This is difficult with wall-clock benchmarks (e.g. criterion.rs, whose timings are noisy on shared CI runners) but possible with Valgrind-based instruction counting.
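For illustration, an iai benchmark is an ordinary `benches/` file; here is a minimal sketch with a toy `dot` workload standing in for a real linfa fit (the file name, workload, and Cargo.toml entries are illustrative, not part of the codebase):

```rust
// benches/dot.rs -- hypothetical benchmark file; needs in Cargo.toml:
//   [dev-dependencies] iai = "0.1"
//   [[bench]] name = "dot" harness = false
use iai::black_box;

// Toy workload standing in for an algorithm fit.
fn dot(n: usize) -> f64 {
    let xs: Vec<f64> = (0..n).map(|i| i as f64).collect();
    xs.iter().zip(xs.iter()).map(|(a, b)| a * b).sum()
}

fn bench_dot_small() -> f64 {
    // black_box keeps the input from being constant-folded away
    dot(black_box(1_000))
}

iai::main!(bench_dot_small);
```

Running `cargo bench` then reports instruction counts, cache accesses and estimated cycles via Cachegrind, which are stable across runs and therefore usable in CI.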

  • [ ] use iai for benchmarking (see the sketch above)
  • [ ] add a workflow executing the benchmarks on PRs/commits to master and creating reports in JSON format
  • [ ] build a script parsing the reports and posting them as comments to the PR (see here); a parsing sketch follows this list
  • [ ] add a page to the website which displays the reports in a human-readable way
  • [ ] (pro) use polynomial regression to find the influence of predictors (e.g. #weights, #features, #samples) on targets (e.g. L1 cache misses, cycles) and post the algorithmic complexity as well; a fitting sketch follows this list
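A minimal sketch of the parsing script, assuming a hypothetical JSON report schema (the field names `name`, `instructions`, `l1_misses` and `cycles` are made up for illustration); actually posting the rendered body through the GitHub API is left out:

```rust
// Render a markdown table for a PR comment from a JSON benchmark report.
// Assumed input: [{"name": "...", "instructions": 123, "l1_misses": 4, "cycles": 456}, ...]
use serde_json::Value;

fn render_comment(report: &str) -> Result<String, serde_json::Error> {
    let entries: Vec<Value> = serde_json::from_str(report)?;
    let mut body = String::from(
        "| benchmark | instructions | L1 misses | est. cycles |\n|---|---|---|---|\n",
    );
    for e in &entries {
        body.push_str(&format!(
            "| {} | {} | {} | {} |\n",
            e["name"].as_str().unwrap_or("?"),
            e["instructions"],
            e["l1_misses"],
            e["cycles"],
        ));
    }
    Ok(body)
}

fn main() {
    let report = r#"[{"name":"bench_dot_small","instructions":123,"l1_misses":4,"cycles":456}]"#;
    println!("{}", render_comment(report).unwrap());
}
```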
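For the last item, even a plain log-log linear fit (a simpler stand-in for full polynomial regression) recovers the complexity exponent: fitting log(target) = a + b·log(predictor) by least squares gives target ∝ predictor^b. A sketch with made-up measurements:

```rust
// Ordinary least squares on the log-log scale: log(y) = a + b*log(x),
// so b is the estimated complexity exponent, i.e. y grows like x^b.
fn fit_loglog(xs: &[f64], ys: &[f64]) -> (f64, f64) {
    let n = xs.len() as f64;
    let lx: Vec<f64> = xs.iter().map(|x| x.ln()).collect();
    let ly: Vec<f64> = ys.iter().map(|y| y.ln()).collect();
    let mx = lx.iter().sum::<f64>() / n;
    let my = ly.iter().sum::<f64>() / n;
    let cov: f64 = lx.iter().zip(&ly).map(|(x, y)| (x - mx) * (y - my)).sum();
    let var: f64 = lx.iter().map(|x| (x - mx).powi(2)).sum();
    let b = cov / var;
    (my - b * mx, b) // y ~= exp(a) * x^b
}

fn main() {
    // Hypothetical measurements: cycles at increasing sample counts.
    let samples = [1_000.0, 2_000.0, 4_000.0, 8_000.0];
    let cycles = [1.0e6, 4.1e6, 15.9e6, 64.2e6];
    let (_a, b) = fit_loglog(&samples, &cycles);
    println!("estimated complexity: O(n^{:.2})", b); // prints roughly O(n^2.00)
}
```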

bytesnake · Mar 19 '21 09:03