Formalise Benchmarks
Rationale
Create a formal benchmark pipeline to compare
- Python
- PythonCall (dev)
- PythonCall (stable)
- PyCall
Originally posted by @cjdoris in https://github.com/cjdoris/PythonCall.jl/issues/300#issuecomment-1547350528
Requirements
- Match benchmark cases across suites
- Use the same Python executable across all interfaces
- Store multiple results or condensed statistics
- Track memory usage
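One way to satisfy the first two requirements could be a single registry of matched cases shared by every suite. The sketch below is purely illustrative (the `CASES` table, the `BENCH_PY_EXE` variable, and the placeholder expressions are assumptions, not existing code); PyCall/PythonCall would additionally need to be configured to use the same interpreter as the pyperf runs.

```julia
# Hypothetical registry of matched benchmark cases (names are illustrative).
const PY_EXE = get(ENV, "BENCH_PY_EXE", "python3")  # interpreter used for the pyperf reference runs

# Each case pairs the Python snippet run through pyperf with the Julia
# expressions benchmarked through PythonCall / PyCall.
const CASES = Dict(
    "list_append" => (
        python     = "x = []\nfor i in range(1000): x.append(i)",
        pythoncall = :(PythonCall.pylist()),        # placeholder expression
        pycall     = :(PyCall.pybuiltin("list")()), # placeholder expression
    ),
)
```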
Comments
Julia Side
Most benchmarking tools in Julia run atop BenchmarkTools.jl, and using its interface to define test suites and store results is the way to go. Both PkgBenchmark.jl and AirspeedVelocity.jl provide functionality to compare multiple versions of a single package, yet neither supports comparison across multiple packages out of the box. There will be some homework for us in building the right tools for this slightly more general setting.
It is worth noting that PkgBenchmark.jl has useful methods in its public API that we could leverage to build what we need, including methods for comparing suites and exporting the results to Markdown. AirspeedVelocity.jl, by contrast, is only made available through its CLI.
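As a rough sketch of the Julia side (file layout and case names are assumptions, following the conventional `benchmark/benchmarks.jl` setup that PkgBenchmark.jl picks up):

```julia
# benchmark/benchmarks.jl -- consumed by PkgBenchmark.benchmarkpkg
using BenchmarkTools
using PythonCall

const SUITE = BenchmarkGroup()
SUITE["scalar"] = BenchmarkGroup()
SUITE["scalar"]["int_roundtrip"] = @benchmarkable pyconvert(Int, Py(1) + Py(2))

# Elsewhere, PkgBenchmark's public API ties the pieces together, e.g.
#   results = PkgBenchmark.benchmarkpkg("PythonCall")
#   PkgBenchmark.export_markdown("report.md", results)
```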
Python Side
In order to enjoy the same level of detail provided by BenchmarkTools.jl, we should adopt pyperf. There are many ways to use it, but a few experiments showed that the CLI + JSON interface is probably the desired option.
For each test case, stored in the PY_CODE variable, we would then create a temporary path JSON_PATH and run
run(`$(PY_EXE) -m pyperf timeit "$(PY_CODE)" --append="$(JSON_PATH)" --tracemalloc`)
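A driver loop over all cases could look like the sketch below, assuming the hypothetical `CASES`/`PY_EXE` registry from above; only `pyperf timeit` options documented upstream (`--append`, `--tracemalloc`) are used.

```julia
# Run every Python reference case through pyperf, one JSON file per case.
json_paths = Dict{String,String}()
for (name, case) in CASES
    json_path = tempname() * ".json"
    run(`$(PY_EXE) -m pyperf timeit $(case.python) --append=$(json_path) --tracemalloc`)
    json_paths[name] = json_path
end
```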
After that, we should be able to parse the output JSON and convert it into a PkgBenchmark.BenchmarkResults object. This makes it easier to integrate those results into the overall machinery, reducing the problem to setting the Python result as the reference value.
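A first cut of the translator might only recover the raw timings. The field names below follow the pyperf JSON layout as I understand it (a top-level `benchmarks` array whose `runs` carry `values` in seconds), so they should be checked against the actual output:

```julia
using JSON            # or JSON3; any JSON parser will do
using BenchmarkTools

# Convert one pyperf JSON file into a BenchmarkTools.Trial (times in ns).
function pyperf_to_trial(json_path::AbstractString)
    data = JSON.parsefile(json_path)
    trial = BenchmarkTools.Trial(BenchmarkTools.Parameters())
    for bench in data["benchmarks"], benchrun in bench["runs"]
        for t in get(benchrun, "values", Float64[])  # calibration runs may carry no "values"
            push!(trial, t * 1e9, 0, 0, 0)           # time, gctime, memory, allocs
        end
    end
    return trial
end
```

Wrapping these Trials in a BenchmarkGroup would give us something the PkgBenchmark comparison machinery can treat as the Python reference.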
Tasks
- [ ] Implement the reference Python benchmark cases
- [ ] Implement the corresponding versions in the other suites
  - [ ] PythonCall (dev)
  - [ ] PythonCall (stable)
  - [ ] PyCall
- [ ] Write a translator from pyperf JSON into BenchmarkResults
- [ ] Write comparison tools
- [ ] Write report generator
- [ ] Set up GitHub Actions
This issue has been marked as stale because it has been open for 60 days with no activity. If the issue is still relevant then please leave a comment, or else it will be closed in 7 days.
This issue has been closed because it has been stale for 7 days. You can re-open it if it is still relevant.
IMO this is still relevant, it should be re-opened and added to a milestone so that it is not automatically re-closed as stale.
Indeed, I like this PR, just haven't had a chance to properly review it.
I had a similar task in another project, and some of the ideas there converged on slightly different approaches. I will be happy to update this PR soon, probably next weekend.
Sounds good!
Since PythonCall is not v1 yet, we have to decide how we want to compare the different branches when the interface changes. Are we going to keep separate suites for dev and stable, or not?