minitrace-rust
Continuous benchmark
As we care a lot about performance, it may be necessary to run benchmarks for each PR submission.
We could try utilizing GitHub Actions for continuous benchmarking, comparing runs against a baseline to detect relative regressions. Ref https://github.com/ethereumjs/ethereumjs-monorepo/issues/897
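
For concreteness, a minimal criterion benchmark sketch of the kind CI could run on every PR. The `workload` function below is a placeholder, not an actual minitrace code path; a real benchmark would create and collect spans.

```rust
// benches/trace.rs
//
// Cargo.toml (assumed setup):
// [dev-dependencies]
// criterion = "0.3"
// [[bench]]
// name = "trace"
// harness = false

use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Placeholder workload; a real benchmark would exercise minitrace
// span creation and collection instead.
fn workload(n: u64) -> u64 {
    (0..n).fold(0, |acc, x| acc ^ black_box(x))
}

fn bench_workload(c: &mut Criterion) {
    // criterion measures wall-clock time, so results from shared CI
    // runners are noisy and need baseline comparison to be meaningful.
    c.bench_function("workload", |b| b.iter(|| workload(black_box(10_000))));
}

criterion_group!(benches, bench_workload);
criterion_main!(benches);
```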
High priority IMO. Regressions are devilishly difficult to identify after the fact. The criterion author had a crate targeting CI use cases. Let me try to dig it up.
IMO the approach to take is to create benchmarks using iai rather than criterion (not that you can't do both). However, I believe that if you want to accept, reject, or evaluate the performance impact of a PR using GitHub Actions or similar tooling, you need to abstract away the underlying hardware and virtualization, which leaves one looking at iai.
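
For comparison, the same placeholder benchmark as a minimal iai sketch. iai runs the benchmark under Valgrind's Cachegrind and reports instruction counts rather than wall-clock time, so results stay stable even on noisy shared CI runners (at the cost of requiring Valgrind, so Linux runners in practice).

```rust
// benches/trace_iai.rs
//
// Cargo.toml (assumed setup):
// [dev-dependencies]
// iai = "0.1"
// [[bench]]
// name = "trace_iai"
// harness = false

use iai::black_box;

// Same placeholder workload as above; a real benchmark would exercise
// minitrace span creation and collection.
fn workload(n: u64) -> u64 {
    (0..n).fold(0, |acc, x| acc ^ black_box(x))
}

// iai bench functions take no arguments; the return value is
// black-boxed so the work isn't optimized away.
fn bench_workload() -> u64 {
    workload(black_box(10_000))
}

iai::main!(bench_workload);
```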
Not sure if there are alternatives.