Ribasim
Use Julia package for benchmarking
In issue #462, a regression test was set up that compares the output of a run against a benchmark.
@visr suggests adding runtime to the benchmark and using one of the existing Julia packages to illustrate the benchmark results better.
This one? https://github.com/JuliaCI/PkgBenchmark.jl
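For reference, PkgBenchmark.jl works off a `benchmark/benchmarks.jl` file in the package that defines a `BenchmarkTools.BenchmarkGroup` named `SUITE`. A minimal sketch of what that could look like here (the group names and the benchmarked expression are placeholders, not actual Ribasim benchmarks):

```julia
# benchmark/benchmarks.jl
using BenchmarkTools

const SUITE = BenchmarkGroup()

# Hypothetical group/benchmark names for illustration.
SUITE["core"] = BenchmarkGroup()
SUITE["core"]["sum"] = @benchmarkable sum(x) setup = (x = rand(1000))
```

PkgBenchmark can then run the suite with `benchmarkpkg("Ribasim")`, and `judge("Ribasim", target, baseline)` compares two commits, which is the regression-detection part relevant to this issue.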
I'm aware of asv in Python, and it seems there's quite a lot involved in getting consistent and commensurate benchmarks.
PkgBenchmark.jl has been around for a while but doesn't seem very actively maintained. Also for CI integration it recommends an unmaintained package.
PkgJogger.jl seems like a better maintained alternative that has nicer local workflows, with easier CI integration.
Though less mature, Lilith's work in https://chairmarks.lilithhafner.com/v1.2.2/regressions seems very nice as well. Perhaps I'd try that first.
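For comparison, Chairmarks.jl offers a lightweight `@b` macro for individual timings; a small sketch (the benchmarked expression is a placeholder, and the regression-tracking workflow from the linked docs involves more setup than this):

```julia
using Chairmarks

# Setup expression first, then the function to benchmark on its result.
timing = @b rand(1000) sum
```

The appeal over BenchmarkTools-style suites is the much lower per-benchmark overhead, which matters when running many benchmarks per commit.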
In terms of published benchmark tracking, there is https://github.com/SciML/SciMLBenchmarks.jl but comparing commits over time like https://lux.csail.mit.edu/benchmarks/ also looks nice.
EDIT: there is now also AirspeedVelocity.jl: https://discourse.julialang.org/t/easy-github-benchmarking-with-new-airspeedvelocity-jl/129327