Benchmarks.jl
Benchmarking from the REPL broken without wrapper function?
```julia
julia> f(x) = sin(sin(sin(sin(sin(sin(sin(x)))))))
f (generic function with 1 method)

julia> @benchmark f(3.33)
================ Benchmark Results ========================
     Time per evaluation: 84.60 ns [84.40 ns, 84.80 ns]
Proportion of time in GC: 0.00% [0.00%, 0.00%]
        Memory allocated: 0.00 bytes
   Number of allocations: 0 allocations
       Number of samples: 8301
   Number of evaluations: 5454801
         R² of OLS model: 0.999
 Time spent benchmarking: 0.50 s

julia> y(x) = sin(x)
y (generic function with 1 method)

julia> @benchmark y(3.33)
================ Benchmark Results ========================
     Time per evaluation: 13.86 ns [13.70 ns, 14.03 ns]
Proportion of time in GC: 0.00% [0.00%, 0.00%]
        Memory allocated: 0.00 bytes
   Number of allocations: 0 allocations
       Number of samples: 10201
   Number of evaluations: 33352001
         R² of OLS model: 0.962
 Time spent benchmarking: 0.51 s

julia> @benchmark sin(sin(sin(sin(sin(sin(sin(3.33)))))))
================ Benchmark Results ========================
     Time per evaluation: 6.53 ns [6.49 ns, 6.57 ns]
Proportion of time in GC: 0.00% [0.00%, 0.00%]
        Memory allocated: 0.00 bytes
   Number of allocations: 0 allocations
       Number of samples: 11001
   Number of evaluations: 71490001
         R² of OLS model: 0.989
 Time spent benchmarking: 0.53 s

julia> @benchmark sin(3.33)
================ Benchmark Results ========================
     Time per evaluation: 13.66 ns [13.52 ns, 13.80 ns]
Proportion of time in GC: 0.00% [0.00%, 0.00%]
        Memory allocated: 0.00 bytes
   Number of allocations: 0 allocations
       Number of samples: 10201
   Number of evaluations: 33352001
         R² of OLS model: 0.970
 Time spent benchmarking: 0.50 s
```
I think this is because we removed the `@noinline` from here: https://github.com/johnmyleswhite/Benchmarks.jl/blob/f5ef97e2ca1adf80c47064f8616e77b4b76b3b8c/src/benchmarkable.jl#L68
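For anyone following along, here's a minimal sketch of why that barrier matters. The names below are hypothetical illustrations, not the actual Benchmarks.jl internals: without a `@noinline` boundary, the compiler is free to inline a call on literal arguments and fold it down to a constant, so the timing loop measures almost nothing.

```julia
# Hypothetical sketch of the inlining barrier; not Benchmarks.jl's actual code.
@noinline call_barrier(f, x) = f(x)  # @noinline keeps the call from being inlined and folded

function loop_without_barrier(n)
    s = 0.0
    for _ in 1:n
        s += sin(3.33)               # literal argument: the call may fold to a constant
    end
    return s
end

function loop_with_barrier(n)
    s = 0.0
    for _ in 1:n
        s += call_barrier(sin, 3.33) # a real call survives on every iteration
    end
    return s
end
```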
At some point, we need to pick a set of N examples that we want to get right and see whether that's actually achievable at all. As is, we seem to be oscillating because it's not clear there is a single correct solution.
cc @mbauman
Working as intended (but needs to be documented).
The latest swing of the pendulum has `@benchmark` working with the same semantics as the `@code_*` macros: it evaluates all arguments and only benchmarks the outermost function call.
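Concretely, that means something like this rough equivalence (my reading of the semantics, not the literal macro expansion):

```julia
# Under these semantics, the nested expression
@benchmark sin(sin(3.33))

# behaves roughly like hoisting the argument out
# and timing only the outermost call:
x = sin(3.33)      # evaluated once, before any timing happens
@benchmark sin(x)  # only this outer call is measured
```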
Oh, wait, I missed the factor of two difference between `@benchmark sin(3.33)` and `@benchmark sin(sin(3.33))`. ~~That's definitely wonky.~~
Edit: Aha, that's because `sin`'s performance is dependent upon its inputs. I imagine that it's doing extra work to bring 3.33 into the domain it works in. `@benchmark sin(-.18)` has comparable performance to `sin(sin(3.33))`.
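If that argument-reduction explanation holds, comparing `sin` on a small argument against one outside the kernel's native range should show the gap directly (expected timings omitted, since they're machine-dependent):

```julia
# Small arguments need little or no range reduction,
# while an argument like 3.33 (> π) has to be reduced into range first.
@benchmark sin(-0.18)  # small argument: fast path
@benchmark sin(3.33)   # large argument: pays for range reduction first
```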
What does `sin(-0.18729466354290317)` produce? I assume there's no real speed difference, but worth making sure.
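For reference, that constant is just the inner call's result, so this benchmarks the outer `sin` of `sin(sin(3.33))` on the input it actually sees:

```julia
julia> sin(3.33)   # the value the outer sin in sin(sin(3.33)) receives
-0.18729466354290317
```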
(Yah, see my edit.)
Ok. Do you mind documenting this, Matt? You clearly have a better understanding of how we ended up with the current approach.
Sure, I can throw something together this weekend.
Many, many thanks.
Ok, that makes sense. Thanks for the clarification.