github-action-benchmark
Feature request: Support iai Rust benchmarks
Rust benchmarking can be noisy in virtualized environments (GH Actions), and the best way to run benchmarks there is probably iai. Its format seems to be unsupported. My workflow runs the benchmarks and prints them to a file: https://github.com/kirillbobyrev/pabi/runs/5079239022
From the docs:
> It is intended as a complement to Criterion.rs; among other things, it's useful for reliable benchmarking in CI.
And
> For benchmarks that run in CI (especially if you're checking for performance regressions in pull requests on cloud CI) you should use Iai. For benchmarking on Windows or other platforms that Valgrind doesn't support, you should use Criterion-rs. For other cases, I would advise using both. Iai gives more precision and scales better to larger benchmarks, while Criterion-rs allows for excluding setup time and gives you more information about the actual time your code takes and how strongly that is affected by non-determinism like threading or hash-table randomization. If you absolutely need to pick one or the other though, Iai is probably the one to go with.
Hm, can you share an example output from iai?
I just implemented support for cargo-criterion's JSON output, but I'm seeing a bit of unexplained runtime variation on CI environments, as outlined by iai:
![Screen Shot 2023-01-05 at 3 48 13 pm](https://user-images.githubusercontent.com/175587/210703774-4202ab4d-0200-4a36-ae12-5563ccb7b757.png)
... so I might explore adding JSON output to iai and then making it compatible with this GHA, now that I have its codebase loaded in my head.
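One possible bridge that wouldn't require changing iai at all: github-action-benchmark already accepts custom JSON via its `customSmallerIsBetter` tool, which takes an array of `{name, unit, value}` objects. A minimal post-processing sketch (the metric names and units here are my own choice for illustration, not anything iai emits):

```rust
// Sketch: serialize parsed iai metrics into the JSON array shape that
// github-action-benchmark's `customSmallerIsBetter` tool accepts:
//   [{"name":"...","unit":"...","value":...}]
// Hand-rolled string formatting keeps this dependency-free; a real
// implementation would use serde_json instead.

fn to_custom_json(entries: &[(String, String, u64)]) -> String {
    let items: Vec<String> = entries
        .iter()
        .map(|(name, unit, value)| {
            format!(r#"{{"name":"{name}","unit":"{unit}","value":{value}}}"#)
        })
        .collect();
    format!("[{}]", items.join(","))
}

fn main() {
    // Hypothetical entries built from the iai output below; the
    // "benchmark/metric" naming scheme is an assumption.
    let entries = vec![
        ("iai_benchmark_short/Instructions".to_string(), "count".to_string(), 1733),
        ("iai_benchmark_short/Estimated Cycles".to_string(), "cycles".to_string(), 2469),
    ];
    println!("{}", to_custom_json(&entries));
}
```

That JSON could then be fed to the action with `tool: customSmallerIsBetter`, since all of iai's counters are lower-is-better.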
Here is an example from a 'dummy' project I used to test CI and other things: https://github.com/Mathieu-Lala/rfc-graph
iai calls cachegrind and generates files in target/iai/cachegrind.out.iai_calibration.
Note: the generated file is named cachegrind.out.iai_calibration; I renamed its extension to .txt to upload it to GitHub.
```
$> cargo bench --workspace --bench "*iai*"
[...]
Finished bench [optimized] target(s) in 16.48s
Running benches/iai/proof.rs (target/release/deps/iai_proof-67dd0600c5b62a17)
iai_benchmark_short
  Instructions:                1733
  L1 Accesses:                 2359
  L2 Accesses:                    1
  RAM Accesses:                   3
  Estimated Cycles:            2469

iai_benchmark_long
  Instructions:            26214733
  L1 Accesses:             35638618
  L2 Accesses:                    2
  RAM Accesses:                   3
  Estimated Cycles:        35638733

Running benches/iai/proof2.rs (target/release/deps/iai_proof2-94cf41bea5ce4d86)
iai_benchmark_short
  Instructions:                1733 (No change)
  L1 Accesses:                 2358 (-0.042391%)
  L2 Accesses:                    1 (No change)
  RAM Accesses:                   4 (+33.33333%)
  Estimated Cycles:            2503 (+1.377076%)

iai_benchmark_long
  Instructions:            26214733 (No change)
  L1 Accesses:             35638617 (-0.000003%)
  L2 Accesses:                    2 (No change)
  RAM Accesses:                   4 (+33.33333%)
  Estimated Cycles:        35638767 (+0.000095%)

Running benches/proof_iai.rs (target/release/deps/proof_iai-9e9bf6aaa8a5a9b1)
iai_benchmark_short
  Instructions:                1733 (No change)
  L1 Accesses:                 2358 (No change)
  L2 Accesses:                    1 (No change)
  RAM Accesses:                   4 (No change)
  Estimated Cycles:            2503 (No change)

iai_benchmark_long
  Instructions:            26214733 (No change)
  L1 Accesses:             35638617 (No change)
  L2 Accesses:                    2 (No change)
  RAM Accesses:                   4 (No change)
  Estimated Cycles:        35638767 (No change)
```
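For anyone writing the parser: the format is regular enough that a line-oriented approach works, and the "Estimated Cycles" figure follows iai's documented cost model (1 cycle per L1 hit, 5 per L2 hit, 35 per RAM hit), which matches the numbers above (2359 + 5·1 + 35·3 = 2469). A stdlib-only sketch for one block of this output:

```rust
// Sketch: parse one benchmark block from iai's plain-text output and
// cross-check "Estimated Cycles" against iai's cost model
// (L1 + 5*L2 + 35*RAM). Input taken from the output pasted above.

fn parse_metric(line: &str) -> (String, u64) {
    // Lines look like "Instructions: 1733" or "L1 Accesses: 2358 (-0.042391%)";
    // the value is the first whitespace-separated token after the colon.
    let (name, rest) = line.split_once(':').expect("metric line");
    let value = rest
        .split_whitespace()
        .next()
        .expect("missing value")
        .parse::<u64>()
        .expect("non-integer value");
    (name.trim().to_string(), value)
}

fn main() {
    let output = "\
iai_benchmark_short
Instructions: 1733
L1 Accesses: 2359
L2 Accesses: 1
RAM Accesses: 3
Estimated Cycles: 2469";

    let mut lines = output.lines();
    let bench_name = lines.next().unwrap().trim();
    let metrics: Vec<(String, u64)> = lines.map(parse_metric).collect();
    let get = |n: &str| metrics.iter().find(|(k, _)| k == n).unwrap().1;

    // Recompute the estimate from the raw cachegrind counters.
    let estimated = get("L1 Accesses") + 5 * get("L2 Accesses") + 35 * get("RAM Accesses");
    assert_eq!(estimated, get("Estimated Cycles")); // 2359 + 5 + 105 = 2469
    println!("{bench_name}: {estimated} estimated cycles");
}
```

A full parser would additionally split the stream on the `Running benches/...` lines and strip the trailing `(…%)` deltas, which this sketch already tolerates.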
Anyone implementing this is welcome to use the examples from Bencher: https://github.com/bencherdev/bencher/tree/main/lib/bencher_adapter/tool_output/rust/iai
In the meantime, if you're blocked by this, Bencher supports Iai: https://github.com/bencherdev/bencher#supported-benchmark-harnesses