Benchmark results
Hey! I think it would be nice to have benchmark results per commit, or at least generated once, to see how efficient it is =)
Hi, thanks for the suggestion! I agree that this would be good. If possible it would be nice to build benchmarks for each pull request, as we do with CodeCov reports, to make sure we aren't accidentally introducing any performance regressions.
We do build the couple of benchmarks we have as part of the CI pipeline to make sure they compile, but we don't actually run them. This wouldn't be too difficult to change, but we'd still need some way of processing the output, ideally in a way that integrates well with GitHub. Perhaps there are GitHub Actions scripts already available which do that?
Beyond that, we'd probably need quite a few more benchmark tests than the two we have at the moment. Ideally these would compare a Flux pipeline with the equivalent C++20 ranges pipeline as a baseline, and possibly with a "raw loop" version as well to see how well it compares.
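To make that concrete, here's a rough sketch of the kind of three-way comparison I have in mind, using nanobench for illustration. The pipeline itself (filter the even numbers, square them, sum) is just a placeholder rather than one of our existing benchmarks, and the Flux calls assume the library's member-chaining adaptor API:

```cpp
// Hypothetical benchmark: the same pipeline written three ways.
#define ANKERL_NANOBENCH_IMPLEMENT
#include <nanobench.h>

#include <flux.hpp>

#include <cstdint>
#include <numeric>
#include <ranges>
#include <vector>

int main()
{
    std::vector<std::int64_t> data(1'000'000);
    std::iota(data.begin(), data.end(), 0);

    auto is_even = [](std::int64_t i) { return i % 2 == 0; };
    auto square  = [](std::int64_t i) { return i * i; };

    ankerl::nanobench::Bench bench;

    // Flux pipeline (assumes the member-chaining adaptor API)
    bench.run("flux", [&] {
        auto sum = flux::ref(data).filter(is_even).map(square).sum();
        ankerl::nanobench::doNotOptimizeAway(sum);
    });

    // Equivalent C++20 ranges pipeline as a baseline
    bench.run("ranges", [&] {
        std::int64_t sum = 0;
        for (auto i : data | std::views::filter(is_even)
                           | std::views::transform(square)) {
            sum += i;
        }
        ankerl::nanobench::doNotOptimizeAway(sum);
    });

    // "Raw loop" version for comparison
    bench.run("raw loop", [&] {
        std::int64_t sum = 0;
        for (auto i : data) {
            if (i % 2 == 0) { sum += i * i; }
        }
        ankerl::nanobench::doNotOptimizeAway(sum);
    });
}
```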
I can recommend this one: https://github.com/benchmark-action/github-action-benchmark
It's easy to configure, but the graphs are too simple =)
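Roughly, the wiring could look like this; the file names, build target, and output format here are just placeholders, not anything the repo has today:

```yaml
# .github/workflows/benchmark.yml -- hypothetical sketch
name: Benchmark
on:
  push:
    branches: [main]

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and run benchmarks
        # Assumes a CMake target that writes results in the action's
        # "customSmallerIsBetter" JSON format to benchmark_results.json
        run: |
          cmake -B build -DCMAKE_BUILD_TYPE=Release
          cmake --build build --target benchmarks
          ./build/benchmarks --out benchmark_results.json
      - name: Store and compare results
        uses: benchmark-action/github-action-benchmark@v1
        with:
          tool: 'customSmallerIsBetter'
          output-file-path: benchmark_results.json
          github-token: ${{ secrets.GITHUB_TOKEN }}
          # Comment on the commit if a benchmark regresses past the threshold
          alert-threshold: '150%'
          comment-on-alert: true
          fail-on-alert: true
```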
Yeah, I think it has to be compared with raw loops and ranges, at least to see that it doesn't add too much of a performance penalty =)
I think nanobench can output a json format that is compatible with pyperf which you can use to check for performance regressions.
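For example, nanobench ships a pyperf output template, so a benchmark could dump its results along these lines (the file name and benchmark body are just examples):

```cpp
#define ANKERL_NANOBENCH_IMPLEMENT
#include <nanobench.h>

#include <fstream>

int main()
{
    ankerl::nanobench::Bench bench;
    bench.run("placeholder", [&] {
        // ... real benchmark body goes here ...
        ankerl::nanobench::doNotOptimizeAway(1 + 1);
    });

    // Render the results with nanobench's built-in pyperf template
    std::ofstream out("flux_bench.json");
    ankerl::nanobench::render(ankerl::nanobench::templates::pyperf(), bench, out);
}
```

A stored baseline could then be diffed in CI with something like `python -m pyperf compare_to baseline.json flux_bench.json`.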
> I can recommend this one: https://github.com/benchmark-action/github-action-benchmark
Thanks for the link! I've been looking for something like this.
> Yeah, I think it has to be compared with raw loops and ranges, at least to see that it doesn't add too much of a performance penalty =)
Also a compile-time impact benchmark; that's often overlooked.
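A simple way to start would be a probe translation unit that gets compiled (but not run) so its build time can be tracked over commits. This is only a sketch with illustrative names, again assuming Flux's member-chaining API:

```cpp
// compile_time_probe.cpp -- hypothetical compile-time probe.
// Time its compilation, e.g.:
//   time g++ -std=c++20 -O2 -c compile_time_probe.cpp
// and compare against an equivalent <ranges>-only translation unit.
#include <flux.hpp>

#include <vector>

int probe(std::vector<int> const& v)
{
    return flux::ref(v)
              .filter([](int i) { return i % 2 == 0; })
              .map([](int i) { return i * i; })
              .sum();
}
```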