mdanalysis
performance benchmarks
It might be worth considering simple performance benchmarks for the project over time. Airspeed velocity (asv) is commonly used for large Python projects, it provides a GitHub badge you can use, and the benchmarks typically live in a separate repo.
It may serve to motivate performance improvements, especially if it reduces the barrier to benchmarking multiple versions of the same portion of code over time, which can otherwise be a pain to do manually. If you're thinking of making some portion of the code faster, but part of the 'activation energy' is the overhead of showing that it actually is faster against a standard benchmark, that overhead could be partially or entirely alleviated.
The more obvious purpose is probably to prevent performance regressions.
It is, admittedly, more work, but a suite like this could probably be built up slowly over time.
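As a rough sketch of how asv reduces that barrier: once a benchmark suite and an `asv.conf.json` exist (both assumed here), comparing versions is a couple of commands rather than manual checkouts and timing scripts. The tag names below are placeholders.

```shell
# Run the benchmark suite over a range of commits (placeholder tag names):
asv run v1.0.0..develop

# Tabulate speedups and slowdowns between two revisions:
asv compare v1.0.0 develop

# Check a feature branch against its base and flag regressions:
asv continuous develop my-feature-branch

# Build and preview the HTML results site locally:
asv publish
asv preview
```

`asv continuous` in particular fits the "is my change actually faster?" workflow, since it benchmarks only the two revisions involved.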
I have started including some crude benchmarks in the blog posts announcing new versions. If we keep doing that, we can build them up slowly.
We should make an effort to cover more parts of the code with ASV benchmarks. Catching performance regressions alone is already helpful.
If we had a simple tutorial showing developers how to easily ASV-check their own code, that might motivate some people to (1) check their code for performance, and (2) write the ASV benchmark together with their code.
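A tutorial could start from a minimal benchmark module. The sketch below follows asv's conventions (methods prefixed with `time_` are timed, `setup` runs before each measurement and is excluded from the timing, `params` sweeps problem sizes); the class name and the pairwise-distance workload are illustrative, not actual MDAnalysis benchmarks.

```python
# benchmarks/benchmarks.py -- hypothetical ASV benchmark module
import numpy as np


class DistanceBench:
    """Time a naive pairwise-distance computation at several problem sizes."""

    # asv runs each time_* method once per parameter value
    params = [100, 1000]
    param_names = ["n_atoms"]

    def setup(self, n_atoms):
        # setup() is re-run before each timing and excluded from the result,
        # so input generation does not pollute the measurement
        rng = np.random.default_rng(0)
        self.coords = rng.random((n_atoms, 3))

    def time_pairwise_distances(self, n_atoms):
        # asv times any method whose name starts with ``time_``
        diff = self.coords[:, None, :] - self.coords[None, :, :]
        np.sqrt((diff ** 2).sum(axis=-1))
```

With this file in place, `asv run` would benchmark it across configured Python environments and commits, and the per-parameter results would appear in the published HTML report.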