ParetoSmooth.jl
Performance Tests
It would be nice to make sure PRs don't degrade performance as part of our tests. I have no idea how this would be done, but there's probably some way to go about it.
I agree, a benchmarking GitHub Action, either run automatically or triggered via a label, would be good.
Since you and I have been noting opposite trends when comparing the performance of ParetoSmooth.jl and PSIS.jl, I think it makes sense to first agree on what a good benchmark for psis even is. How about this?
using ParetoSmooth, Random, BenchmarkTools
Random.seed!(42)
log_ratios = randn(100, 1_000, 4)
r_eff = ones(size(log_ratios, 1))
@benchmark psis($log_ratios; r_eff=$r_eff)
Looks pretty good! Do you know how we'd set up a benchmarking GitHub Action?
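One possible shape for a label-triggered benchmarking workflow, sketched as a GitHub Actions config. The workflow name, the `run benchmarks` label, and the `benchmark/benchmarks.jl` path are all illustrative assumptions, not existing repo conventions:

```yaml
# Sketch only: label name and benchmark script path are assumptions.
name: Benchmarks
on:
  pull_request:
    types: [labeled, synchronize]

jobs:
  benchmark:
    # Run only when the PR carries a "run benchmarks" label.
    if: contains(github.event.pull_request.labels.*.name, 'run benchmarks')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: julia-actions/setup-julia@v2
      - uses: julia-actions/cache@v2
      - name: Run benchmark suite
        run: julia --project=benchmark -e 'using Pkg; Pkg.instantiate(); include("benchmark/benchmarks.jl")'
```

Comparing results against the base branch would still need an extra step (e.g. running the same suite on the merge target and diffing timings), which tools built on BenchmarkTools.jl can help with.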
I would suggest a larger first dimension for log_ratios, though, maybe 1000 instead of 100, since I'd like to know how it deals with large datasets (where performance matters the most).
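With that change, the benchmark above would become (same seed and `r_eff` construction, only the first dimension enlarged; a sketch of the suggested variant, not an agreed-on suite):

```julia
using ParetoSmooth, Random, BenchmarkTools

Random.seed!(42)
# Larger first dimension (1000 data points) to stress large-dataset performance.
log_ratios = randn(1_000, 1_000, 4)
r_eff = ones(size(log_ratios, 1))
@benchmark psis($log_ratios; r_eff=$r_eff)
```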