
Formalise Benchmarks

pedromxavier opened this issue 2 years ago • 7 comments

Rationale

Create a formal benchmark pipeline to compare

  • Python
  • PythonCall (dev)
  • PythonCall (stable)
  • PyCall

Originally posted by @cjdoris in https://github.com/cjdoris/PythonCall.jl/issues/300#issuecomment-1547350528

Requirements

  1. Match benchmark cases across suites
  2. Use the same Python executable across all interfaces (see the sketch after this list)
  3. Store multiple results or condensed statistics
  4. Track memory usage
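
As a sketch of requirement 2, the interpreter could be pinned once before either package is loaded; PY_EXE is a placeholder path, and the environment variables are the documented configuration hooks of PyCall and PythonCall/CondaPkg:

    # Point both PyCall and PythonCall at the same interpreter.
    PY_EXE = "/usr/bin/python3"  # placeholder; use the benchmark runner's Python

    # PyCall picks up ENV["PYTHON"] at build time.
    ENV["PYTHON"] = PY_EXE
    import Pkg
    Pkg.build("PyCall")

    # PythonCall normally manages Python via CondaPkg; disabling that backend
    # and setting JULIA_PYTHONCALL_EXE makes it use the same binary.
    ENV["JULIA_CONDAPKG_BACKEND"] = "Null"
    ENV["JULIA_PYTHONCALL_EXE"] = PY_EXE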

Comments

Julia Side

Most benchmarking tools in Julia are built on top of BenchmarkTools.jl, so using its interface to define test suites and store results is the way to go. Both PkgBenchmark.jl and AirspeedVelocity.jl provide functionality to compare multiple versions of a single package, yet neither supports comparison across multiple packages out of the box. There will be some homework for us in building the right tools for this slightly more general setting.
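
For illustration, a suite along those lines might start like the snippet below; the group and case names are placeholders rather than a proposed benchmark set:

    using BenchmarkTools
    using PythonCall

    SUITE = BenchmarkGroup()
    SUITE["convert"] = BenchmarkGroup()
    SUITE["call"] = BenchmarkGroup()

    # Hypothetical case: round-trip a Julia vector through a Python list.
    SUITE["convert"]["list_roundtrip"] =
        @benchmarkable pyconvert(Vector{Int}, pylist(x)) setup=(x = collect(1:1000))

    # Hypothetical case: call into a Python module from Julia.
    SUITE["call"]["math_sin"] =
        @benchmarkable m.sin(1.0) setup=(m = pyimport("math"))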

It is worth noting that PkgBenchmark.jl has useful methods in its public API that we could leverage to build what we need, including methods for comparing suites and for exporting the results to Markdown. AirspeedVelocity.jl, by contrast, is only exposed through its CLI.
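
For instance, a dev-versus-stable comparison with a Markdown report could lean on those entry points roughly as follows; the revision names are illustrative, and it is assumed PythonCall ships the suite in benchmark/benchmarks.jl as PkgBenchmark expects:

    using PkgBenchmark

    # Compare the suite on two revisions of PythonCall; "main" and the
    # release tag below are placeholders for the dev and stable revisions.
    judgement = judge("PythonCall", "main", "v0.9.0")

    # Write a human-readable comparison report.
    export_markdown("benchmark_report.md", judgement)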

Python Side

In order to enjoy the same level of detail provided by BenchmarkTools.jl, we should adopt pyperf. There are many ways to use it, but a few experiments showed that the CLI + JSON interface is probably the best option.

For each test case, stored in the PY_CODE variable, we would then create a temporary path JSON_PATH and run

run(`$(PY_EXE) -m pyperf timeit "$(PY_CODE)" --append="$(JSON_PATH)" --tracemalloc`)

After that, we should be able to parse the output JSON and convert it into a PkgBenchmark.BenchmarkResults object. This makes it easier to integrate those results into the overall machinery, reducing the problem to setting the Python results as the reference values.
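
As a rough sketch of that translation step, the raw timings can be extracted from the pyperf JSON first; this assumes pyperf's layout of a top-level "benchmarks" array whose runs carry a "values" list of timings in seconds (worth checking against the pyperf version in use), and wrapping the result into a full BenchmarkResults object is left out:

    using JSON  # assumed available in the benchmark environment

    # Collect the measured wall-clock times (in nanoseconds, to match
    # BenchmarkTools conventions) for every benchmark in a pyperf JSON file.
    function pyperf_times(json_path::AbstractString)
        data = JSON.parsefile(json_path)
        results = Dict{String, Vector{Float64}}()
        for bench in data["benchmarks"]
            name = get(get(bench, "metadata", Dict()), "name", "timeit")
            values = Float64[]
            for run in bench["runs"]
                # Calibration/warm-up runs carry no "values" entry.
                append!(values, get(run, "values", Float64[]))
            end
            results[name] = values .* 1e9  # pyperf reports seconds
        end
        return results
    end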

Tasks

  • [ ] Implement the reference Python benchmark cases
  • [ ] Implement the corresponding versions in the other suites
    • [ ] PythonCall (dev)
    • [ ] PythonCall (stable)
    • [ ] PyCall
  • [ ] Write a translator for pyperf JSON into BenchmarkResults
  • [ ] Write comparison tools
  • [ ] Write report generator
  • [ ] Set up GitHub Actions


pedromxavier • May 16 '23

This issue has been marked as stale because it has been open for 60 days with no activity. If the issue is still relevant then please leave a comment, or else it will be closed in 7 days.

github-actions[bot] • Aug 19 '23

This issue has been closed because it has been stale for 7 days. You can re-open it if it is still relevant.

github-actions[bot] • Aug 27 '23

IMO this is still relevant; it should be re-opened and added to a milestone so that it is not automatically re-closed as stale.

LilithHafner • Sep 21 '23

Indeed, I like this PR, just haven't had a chance to properly review it.

cjdoris • Sep 21 '23

I had a similar task in another project, and some of the ideas converged on slightly different approaches. I will be happy to update this PR soon, probably over the next weekend.

pedromxavier • Sep 21 '23

Sounds good!

cjdoris • Sep 21 '23

Since PythonCall is not at v1 yet, we have to decide how we want to compare the different branches when the interface changes. Are we going to keep separate suites for dev and stable, or not?

pedromxavier • Sep 28 '23