Consider tracking performance across package releases.
We currently have no way to measure the performance of functions in the package, or to track performance regressions in those functions across releases.
Packages like pytest-benchmark plug directly into pytest, but Python also comes with its own profiler and visualizer.
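For reference, this is roughly what pytest-benchmark usage looks like; `slow_sum` here is just a placeholder, not a function from this package:

```python
# Sketch of pytest-benchmark's `benchmark` fixture; `slow_sum` stands in for
# whatever package function we'd actually want to measure.
def slow_sum(n):
    return sum(i * i for i in range(n))

def test_slow_sum_benchmark(benchmark):
    # The fixture calls the target repeatedly, collects timing statistics,
    # and pytest-benchmark reports them as tables/histograms at the end.
    result = benchmark(slow_sum, 10_000)
    assert result == sum(i * i for i in range(10_000))
```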
There’s a related discussion here: https://github.com/python/mypy/issues/14187
At the risk of being repetitive, I strongly believe benchmarking is crucial to at least knowing where we stand. First rule of optimization: measure, don't guess.
That being said, I LOVE those stats and histograms pytest-benchmark generates. However, AFAIU the call to benchmark has to be explicit for each test we'd want to benchmark. Would it make sense to expose a decorator in the template such that tests can be easily wrapped into a benchmark call?
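A rough sketch of the kind of decorator I have in mind (nothing like this exists in the template yet; it assumes the standard pytest-benchmark `benchmark` fixture is available):

```python
# Hypothetical decorator that wraps a plain test so its body runs under the
# pytest-benchmark fixture.
def benchmarked(test_fn):
    # Deliberately not using functools.wraps: pytest inspects the wrapper's
    # signature to decide which fixtures to inject, and it needs to see the
    # `benchmark` parameter here.
    def wrapper(benchmark):
        benchmark(test_fn)
    wrapper.__name__ = test_fn.__name__
    return wrapper

@benchmarked
def test_square_sum():
    # The original assertions still run on every benchmarked call.
    assert sum(i * i for i in range(1000)) == 332833500
```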
Regarding the Python profilers, I have mixed feelings. I use them often and extensively, but I've found that they don't play particularly well with multiple processes. And I have yet to find a way to easily profile memory in Python.
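For completeness, a minimal sketch of how the standard-library profiler mentioned above is typically used (`workload` is just a stand-in; the dump file name is arbitrary):

```python
# Profile a placeholder workload with cProfile and print the hot spots.
import cProfile
import pstats

def workload():
    return sorted(str(i) for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

stats = pstats.Stats(profiler)
stats.sort_stats("cumulative").print_stats(10)  # top 10 entries by cumulative time
stats.dump_stats("workload.prof")  # can be browsed later, e.g. with `python -m pstats`
```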