pytest-benchmark
Accessing results of benchmark to catch performance regressions?
I would like to do something like:
def test_perf(benchmark):
    results = benchmark(some_fn)
    if results.median_ms > 2000:
        raise Exception('Performance regression!')
This is obviously not the current API of benchmark (results is the return of some_fn) -- but can this be done some other way?
You can probably access various internal stats on the fixture (e.g. benchmark.stats.min), but I have to wonder if there's a better way to deal with this.
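An absolute check like the one sketched above can be approximated without relying on the fixture's internals. Below is a minimal, hedged sketch: `run_benchmark_stub` is a hypothetical stand-in (not the real pytest-benchmark fixture) that times a callable over several rounds, and `check_median` raises when the median round exceeds an absolute threshold:

```python
import statistics
import time


def run_benchmark_stub(fn, rounds=20):
    """Hypothetical stand-in for the benchmark fixture: times fn over
    several rounds and returns the per-round durations in seconds."""
    timings = []
    for _ in range(rounds):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return timings


def check_median(timings, threshold_s):
    """Absolute regression check: fail if the median round exceeds threshold_s."""
    median = statistics.median(timings)
    if median > threshold_s:
        raise AssertionError(
            f'Performance regression: median {median:.6f}s > {threshold_s}s'
        )
    return median
```

Usage would look like `check_median(run_benchmark_stub(some_fn), 2.0)`; the real fixture collects richer stats, so this only illustrates the shape of a per-test absolute check.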
pytest-benchmark has regression checks: http://pytest-benchmark.readthedocs.io/en/latest/comparing.html (the --benchmark-compare-fail option). Perhaps you need per-test checks? Let's talk a bit about the particularities of your case.
Is an absolute check with no comparison to previous runs (like in your example) all you need?
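For reference, a typical compare-fail workflow per the linked docs looks roughly like this (the exact expression syntax is documented there; treat the 5% figure as an example):

```shell
# Run once and save the results as a baseline:
pytest --benchmark-autosave

# Later, compare against the saved run and fail the suite if the
# median regresses by more than 5%:
pytest --benchmark-compare --benchmark-compare-fail=median:5%
```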
Thanks for the pointer, sorry I missed this in the docs 😦
I think this should work! The one question/concern I have is whether it is possible to apply different strictness to different benchmarks? Being able to introspect current (and, I guess, previous saved benchmarks!) seems like it would support this for advanced/edge cases. Unless there already is a way to do it!
Thanks again!
Currently not possible. I could add some marker API (strong preference for declarative over imperative, which you can already do anyway):
@pytest.mark.benchmark(stats_median_lt=123)
# ooor
# @pytest.mark.benchmark(stats='median<123')
def test_perf(benchmark):
    results = benchmark(some_fn)
I'm still not set on the exact API for specifying the constraints. If you have suggestions, they are welcome.
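The second proposed form (stats='median<123') implies parsing a small constraint expression. Purely as a sketch of that hypothetical API, not anything pytest-benchmark actually ships, the parsing could look like:

```python
import operator
import re

# Comparison operators the hypothetical constraint string would support.
_OPS = {'<': operator.lt, '<=': operator.le, '>': operator.gt, '>=': operator.ge}


def parse_constraint(expr):
    """Turn a string like 'median<123' into (field_name, predicate)."""
    m = re.fullmatch(r'(\w+)\s*(<=|>=|<|>)\s*([\d.]+)', expr)
    if not m:
        raise ValueError(f'Bad constraint: {expr!r}')
    field, op, value = m.groups()
    bound = float(value)
    return field, lambda actual: _OPS[op](actual, bound)


def check_stats(stats, expr):
    """Apply a constraint like 'median<123' to a mapping of stat values."""
    field, predicate = parse_constraint(expr)
    return predicate(stats[field])
```

Whether the marker would take one string, a list of them, or keyword arguments (stats_median_lt=123) is exactly the open API question above.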
+1 - I'd love to see this. Also a way to give a specific file/path or string-based "previous run" to regress against.