JuMP.jl

continuous performance testing

Open mlubin opened this issue 12 years ago • 12 comments

Codespeed?

mlubin avatar Sep 21 '13 23:09 mlubin

Would be nice! More to detect errant Julia changes than our own, perhaps

IainNZ avatar Sep 21 '13 23:09 IainNZ

Could we incorporate this into the Travis builds somehow?

joehuchette avatar Feb 25 '14 01:02 joehuchette

Not really; Travis runs on shared VMs, so it would be hard to get consistent results.

mlubin avatar Feb 25 '14 01:02 mlubin

Ping @jrevels, JuMP would benefit a lot from this

mlubin avatar Oct 02 '15 02:10 mlubin

Literally was just talking to folks at Julia Central about CI perf testing today, going to be experimenting with writing webhooks to do this in the coming week(s). I'll definitely keep you posted.

jrevels avatar Oct 02 '15 02:10 jrevels

Pinging @mlubin @jrevels: did you ever figure out how to do this in a clever way?

pkofod avatar Dec 22 '16 12:12 pkofod

@pkofod, there was never any substantial effort put into this

mlubin avatar Dec 22 '16 17:12 mlubin

This came up on Gitter today, so I did some investigating:

  • @ericphanson did an excellent job on Convex.jl benchmarks
    • https://github.com/jump-dev/Convex.jl/tree/master/benchmark
    • https://ericphanson.github.io/ConvexTests.jl/dev/
  • JuliaCI has packages to help (a PkgBenchmark.jl sketch follows this list)
    • https://github.com/JuliaCI/Nanosoldier.jl
    • https://github.com/JuliaCI/PkgBenchmark.jl
    • https://github.com/JuliaCI/BaseBenchmarks.jl
  • PowerSimulationsDynamics.jl uses CI:
    • https://github.com/NREL-SIIP/PowerSimulationsDynamics.jl/blob/master/.github/workflows/performance_comparison.yml
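
For reference, PkgBenchmark.jl could do most of the plumbing for comparing a branch against master. A minimal sketch follows; the branch name and report path are placeholders, not an agreed-upon setup:

```julia
# Assumes the package defines a BenchmarkTools suite in benchmark/benchmarks.jl
# that assigns a BenchmarkGroup to a variable named SUITE (the PkgBenchmark convention).
using PkgBenchmark

# Benchmark the current state of the package.
results = benchmarkpkg("JuMP")

# Or compare a feature branch against master and write a markdown report.
judgement = PkgBenchmark.judge("JuMP", "my-feature-branch", "master")
export_markdown("benchmark_report.md", judgement)
```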

I don't think we want to run the benchmarks on every commit; that'd get a bit painful. We probably just want a run on each commit to master, plus the ability to run on-demand for a PR.

For the benchmarks, we probably want (see the sketch after this list):

  • JuMP and MOI-specific benchmarks
    • time of using JuMP and using MathOptInterface
    • time to build simple models
    • time for various expression manipulations
    • https://github.com/jump-dev/MathOptInterface.jl/blob/master/src/Benchmarks/Benchmarks.jl
  • Solver integration benchmarks
    • How long to build and solve an LP from scratch?
    • https://github.com/jump-dev/MathOptInterface.jl/tree/master/perf/time_to_first_solve
    • See also Convex.jl
  • Another source
    • https://github.com/jump-dev/MOIPaperBenchmarks
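
To make a couple of these concrete, here is a rough sketch of what the suite entries might look like with BenchmarkTools.jl; the group names, problem sizes, and the choice of GLPK as the LP solver are illustrative placeholders, not decisions:

```julia
# Sketch of a benchmark/benchmarks.jl suite; names, sizes, and the GLPK solver
# are illustrative placeholders only.
using BenchmarkTools, JuMP, GLPK

const SUITE = BenchmarkGroup()

# Time to build a simple model (no solver attached).
function build_model(n)
    model = Model()
    @variable(model, x[1:n] >= 0)
    @objective(model, Min, sum(x))
    @constraint(model, sum(x) >= 1)
    return model
end
SUITE["build_model"] = @benchmarkable build_model(1_000)

# Time to build and solve a small LP from scratch (solver integration).
function build_and_solve(n)
    model = Model(GLPK.Optimizer)
    set_silent(model)
    @variable(model, 0 <= x[1:n] <= 1)
    @objective(model, Max, sum(i * x[i] for i in 1:n))
    @constraint(model, sum(x) <= n / 2)
    optimize!(model)
    return objective_value(model)
end
SUITE["build_and_solve_lp"] = @benchmarkable build_and_solve(100)

# Load time of using JuMP has to be measured in a fresh process, e.g. by
# shelling out to `julia -e '@time using JuMP'` rather than from this suite.
```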

This could all sit in a new repository (JuMPBenchmarks.jl) and push to a GitHub page with plots like

  • https://odow.github.io/progress-metrics-OAC-1835443/

So in summary, I think we have a lot of what is needed. It just needs some plumbing to put together. There is also the question of dedicated hardware for this. But I can probably be persuaded to get a small PC to sit in the corner of my office as a space-heater during winter.

odow avatar Feb 03 '22 03:02 odow

https://github.com/jump-dev/Convex.jl/tree/master/benchmark

This may have bitrotted, unfortunately; we used to run the benchmarks in CI, but I never remembered to look at the results (hidden in the Travis logs at the time), so I removed it (or perhaps just didn’t replace it when we switched to GitHub Actions). It also slowed down CI a lot. That code was based on @tkf’s, and he likely has better versions these days (maybe https://github.com/JuliaFolds/Transducers.jl/tree/master/benchmark).

So I also agree with not running it per-commit. It could be useful for it to be runnable on-demand in a PR, like Nanosoldier for Julia Base, so if you suspect a change could cause a regression you can trigger it.

It might be useful to look at how SciML does their benchmarks too: https://github.com/SciML/SciMLBenchmarks.jl. It also looks like there’s some “juliaecosystem” hardware; perhaps JuMP can get access too: https://github.com/SciML/SciMLBenchmarks.jl/blob/bda2ca650fd4fbd25e3bcdc0ddb4b43535bcd7b6/.buildkite/run_benchmark.yml#L50 (I’ve got no idea, though).

ericphanson avatar Feb 03 '22 03:02 ericphanson

FYI, there's a setting to run the benchmark only when a label is applied. Take a look at the setting with if: contains(github.event.pull_request.labels.*.name, 'run benchmark') in https://github.com/tkf/BenchmarkCI.jl#create-a-workflow-file-required (thanks to @johnnychen94; ref https://github.com/tkf/BenchmarkCI.jl/pull/65).

As for my recent approach, I've mostly moved to setting up the benchmark suite as a smoke test (e.g., taking only one sample) and invoking it from the test suite. It's not actually continuous performance testing, but rather just guards against breaking the benchmark code. I still find it useful.
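
For example, something along these lines (just a sketch; it assumes the suite lives in benchmark/benchmarks.jl and exposes a SUITE variable):

```julia
# test/runtests.jl: smoke-test the benchmark suite so it doesn't bitrot.
using Test, BenchmarkTools

include(joinpath(@__DIR__, "..", "benchmark", "benchmarks.jl"))

@testset "benchmark smoke test" begin
    # One sample and one evaluation per benchmark: only checks the code still runs.
    results = run(SUITE; samples = 1, evals = 1, verbose = false)
    @test results isa BenchmarkGroup
end
```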

tkf avatar Feb 03 '22 04:02 tkf

Ideally once JuMP 1.0 is released, we wouldn't have to worry about breaking any benchmarks. (And if we did, that's an indication that we've done something wrong!)

There are some Julia servers for the GPU and SciML stuff that host jobs on Buildkite (we use one for running the SCS GPU tests). Their benchmarks are pretty heavy, though. I'm envisaging some much smaller runs, so we don't need a beefy machine.

odow avatar Feb 03 '22 04:02 odow

Made progress here: https://github.com/jump-dev/benchmarks

Dashboard is available at https://jump.dev/benchmarks/

odow avatar May 06 '22 05:05 odow