Forest-vs-forest benchmarking (PRs)
Issue summary
To catch regressions, we should (on request) run the `forest-tool` benchmark commands of the current PR against the latest released version. The benchmarks will be run on fuzzy, and the results will be pasted into the PR that requested the run.
The output could look like this:
| Benchmark | PR | 0.13.1 | Change |
|---|---|---|---|
| car-streaming | 17s | 20s | 0.85 |
| forest-encoding | 45 | 46 | 0.98 |
| graph-traversal | 10s | 10s | 1.00 |
| graph-traversal (mem) | 3.8 GiB | 3.5 GiB | 1.09 |
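As a sketch of how the table above could be produced, the relative change is simply the PR measurement divided by the release measurement, with a dash for benchmarks that failed or don't exist. The helper names below (`change`, `row`) are hypothetical, not part of any existing script:

```rust
// Hypothetical helpers for rendering one row of the benchmark table.
// A missing measurement (failed or nonexistent benchmark) becomes "-".
fn change(pr: Option<f64>, release: Option<f64>) -> String {
    match (pr, release) {
        (Some(p), Some(r)) if r != 0.0 => format!("{:.2}", p / r),
        _ => "-".to_string(),
    }
}

fn row(name: &str, pr: &str, release: &str, change: &str) -> String {
    format!("| {} | {} | {} | {} |", name, pr, release, change)
}

fn main() {
    // car-streaming: 17s on the PR vs 20s on the release.
    let c = change(Some(17.0), Some(20.0));
    println!("{}", row("car-streaming", "17s", "20s", &c));
    // A benchmark that failed on the PR side is marked with a dash.
    let c = change(None, Some(46.0));
    println!("{}", row("forest-encoding", "-", "46", &c));
}
```

Note that a ratio below 1.00 means the PR is faster (or uses less memory) than the release.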
Tasks:
- [ ] Create Rust script:
  - [ ] Download a mainnet snapshot if one isn't available on disk.
  - [ ] Fetch the previous release binaries of Forest.
  - [ ] Compile the current PR in release mode.
  - [ ] Run benchmark commands with `forest-tool` for both the PR and the latest release.
  - [ ] Measure runtime and peak memory usage (measure directly, ignore output from `forest-tool`).
  - [ ] Format data as a markdown table and (optionally) as JSON.
  - [ ] Compute the relative change as the PR measurement divided by the previous release's measurement.
  - [ ] If a benchmark command fails or doesn't exist, mark its results with a dash: `-`.
- [ ] Trigger the script on PR comments containing a keyword (e.g., `!bench`). See the linked workflow from `grpc_bench`.
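The "measure directly" step could be sketched as follows, assuming the script times the child process itself rather than parsing `forest-tool` output; `time_command` is a hypothetical helper, and the placeholder `true` stands in for the real `forest-tool` invocation. Peak memory could additionally be read from `getrusage(RUSAGE_CHILDREN)` (`ru_maxrss`) via the `libc` crate after the child exits, which is not shown here:

```rust
use std::process::Command;
use std::time::Instant;

// Hypothetical sketch: time a benchmark command directly.
// Returns None when the command fails or doesn't exist, so the caller
// can render "-" in the results table.
fn time_command(program: &str, args: &[&str]) -> Option<f64> {
    let start = Instant::now();
    let status = Command::new(program).args(args).status().ok()?;
    if status.success() {
        Some(start.elapsed().as_secs_f64())
    } else {
        None
    }
}

fn main() {
    // Placeholder command; the real script would invoke forest-tool here.
    match time_command("true", &[]) {
        Some(secs) => println!("runtime: {:.2}s", secs),
        None => println!("runtime: -"),
    }
}
```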
Other information and links
Trigger workflow on comment: https://github.com/LesnyRumcajs/grpc_bench/blob/master/.github/workflows/issue_bench.yml