Automatically run benchmarks using GitHub Actions
> We should make a benchmark action in the CI. That would be a good way to track those issues.
Originally posted by @cdiener in https://github.com/opencobra/cobrapy/issues/997#issuecomment-683314766
What is a good interval to run this? With every push to devel? At every release (that seems too infrequent)? Ideas welcome. It'd be nice to automatically visualize the results, too.
Do we have a limit on CI minutes? Every push seems fine. Maybe every push to stable if that is too much. I would also run only the benchmarks, not the other tests, which should make it a bit faster. Then save the JSON and export it as an artifact so we can use it with the compare script we had at some point.
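A minimal sketch of what such a workflow could look like, assuming pytest-benchmark is used for the benchmarks (the file name, branch filter, and pinned Python version are just placeholders):

```yaml
# Hypothetical .github/workflows/benchmark.yml
name: benchmark

on:
  push:
    branches: [devel]

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: '3.8'
      - name: Install cobrapy with benchmark dependencies
        run: pip install -e . pytest pytest-benchmark
      - name: Run only the benchmarks, skipping the regular tests
        run: pytest --benchmark-only --benchmark-json=benchmark.json
      - name: Export the raw JSON as an artifact
        uses: actions/upload-artifact@v2
        with:
          name: benchmark-results
          path: benchmark.json
```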
I would say every merge to `devel` and `stable` would be a good option. With every merge to `devel`, we would be informed of anything weird, and thus we can be sure of what to expect when we eventually merge changes into `stable`.
Yeah, I think running the benchmarks on every push should be fine. Maybe only on the latest Python? Or only on 3.6?
Also, it'd be pretty cool to commit results to a `benchmark` branch and display performance over time as plots, somewhat similar to the memote history report.
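One possible way to get those plots, assuming the third-party `benchmark-action/github-action-benchmark` action is acceptable: it can parse pytest-benchmark JSON and push a chart page to a chosen branch (the branch name and file path below are assumptions):

```yaml
jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # ... run the benchmarks as in the sketch above, producing benchmark.json ...
      - name: Commit results and render history charts
        uses: benchmark-action/github-action-benchmark@v1
        with:
          tool: 'pytest'                 # parses pytest-benchmark JSON output
          output-file-path: benchmark.json
          gh-pages-branch: benchmark     # history branch name (an assumption)
          auto-push: true
          github-token: ${{ secrets.GITHUB_TOKEN }}
```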
Latest stable Python sounds good (so 3.8 right now). Here is the script we used when we switched to optlang: https://gist.github.com/cdiener/f326c33f331b370c6596fcf83d9d4bb4. Hope it still works. Can you trigger actions when another action finishes? If yes, this should not be too hard; I could take a stab at it if nobody else plans to. We could also push to the wiki instead of a branch. It may be easier to find, and you could just fill in a Markdown template.
> Can you trigger actions when another action finishes?
I was just thinking of using a separate workflow that runs on push, so it would run in parallel with the tests. Otherwise, you could add another job like the release job, but I think running in parallel rather than waiting for the tests is fine.
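For completeness, chaining is possible via the `workflow_run` event, which fires when another workflow finishes. A sketch, assuming the test workflow is named "CI":

```yaml
on:
  workflow_run:
    workflows: ["CI"]          # name of the test workflow (assumed here)
    types: [completed]

jobs:
  benchmark:
    # run only when the upstream workflow succeeded
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
```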
> We could also push to the wiki instead of a branch. It may be easier to find, and you could just fill in a Markdown template.
My reason for a branch was to be able to record and commit each benchmark result from `devel`, but plotting the results to the wiki would be nice indeed!
Oh yeah, I initially thought we would need to combine the results across all OSes, but it's better to do it separately. I would have pushed the raw JSONs to the wiki as well.
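If we keep the OSes separate, a matrix job that uploads one artifact per OS would be a minimal sketch (artifact names are illustrative):

```yaml
jobs:
  benchmark:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
    steps:
      - uses: actions/checkout@v2
      # ... install and run the benchmarks as above ...
      - uses: actions/upload-artifact@v2
        with:
          # one artifact per OS keeps the results separate
          name: benchmark-${{ matrix.os }}
          path: benchmark.json
```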