pytest-benchmark
Allow configuring precision or be smarter about it
From pytest-benchmark's test suite:
test_single 1192.0929
test_setup 1907.3486
test_args_kwargs 1907.3486
test_iterations 190.7349
test_rounds_iterations 95.3674
The benchmark report is 188 characters wide, but I think it contains a lot of noise: chances are, you wouldn't really care about 4 digits of precision if the number is in the thousands or millions range.
You could do something like this (precision depends on the value):
test_single 1192
test_setup 1907
test_args_kwargs 1907
test_iterations 190
test_rounds_iterations 95.4
And maybe add thousands separators as well:
test_single 1,192
test_setup 1,907
test_args_kwargs 1,907
test_iterations 190
test_rounds_iterations 95.4
You could have a fixed precision mode (like now, precision == 4), or an auto precision mode, which could be formatted like so:
0.00001 : < 1e-4
0.00012 : 0.0001
0.00123 : 0.0012
0.01234 : 0.0123
0.12345 : 0.1234
1.23456 : 1.234
12.3456 : 12.34
123.456 : 123.4
1234.56 : 1,234
(as for the small values, could print them as 0.0000 or indicate explicitly that they fall below precision range)
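A toy formatter along these lines could look like the sketch below. This is not part of pytest-benchmark, the function name is made up, and it rounds rather than truncates, so the last digit can differ slightly from the table above:

```python
def format_measurement(value, precision=None):
    """Pick precision from the magnitude of the value (hypothetical helper).

    With precision=None, show roughly 4 significant digits and use a
    thousands separator for large values; a fixed precision can still be
    forced explicitly.
    """
    if precision is not None:
        return "{:,.{p}f}".format(value, p=precision)
    if value < 1e-4:
        return "< 1e-4"                      # below the displayable range
    if value < 1:
        return "{:.4f}".format(value)        # 0.0001 .. 0.9999
    if value < 10:
        return "{:.3f}".format(value)        # 1.234
    if value < 100:
        return "{:.2f}".format(value)        # 12.34
    if value < 1000:
        return "{:.1f}".format(value)        # 123.4
    return "{:,.0f}".format(value)           # 1,234 with thousands separator
```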
Tbh I like that there is alignment in the output. If we don't always show the fractional part, the alignment is lost.
On the other hand, users should be able to customize the output - I'll think about this. Maybe a hook that can provide a custom reporter, and the builtin reporter could be a class that you can easily customize?
Well, I guess you could do the auto-precision based on the minimum value, for instance, like
test_single 1192.1
test_setup 1907.3
test_args_kwargs 1907.3
test_iterations 190.7
test_rounds_iterations 95.4
This would still be a big win when the range between min and max is not excessively big.
You could also take the minimum value of all columns with the same scale (i.e. min/max/median/mean) and use the same precision for all of them, for consistency's sake; see the sketch below.
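A rough sketch of that idea (the function name and the choice of 3 significant digits are just assumptions for illustration):

```python
import math


def column_precision(values, sig_digits=3):
    """Pick one decimal precision for a whole column from its smallest value.

    The minimum is the most precision-sensitive number, so it decides how
    many decimals every row gets; that keeps the column aligned.
    """
    smallest = min(values)
    if smallest <= 0:
        return sig_digits
    integer_digits = max(1, int(math.floor(math.log10(smallest))) + 1)
    return max(0, sig_digits - integer_digits)


timings = [1192.0929, 1907.3486, 1907.3486, 190.7349, 95.3674]
prec = column_precision(timings)                       # min is 95.3674 -> 1 decimal
print(["{:.{p}f}".format(t, p=prec) for t in timings])
# ['1192.1', '1907.3', '1907.3', '190.7', '95.4']
```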
Maybe add a command-line arg for it, like:
- --benchmark-rounding=auto - precision is based on the minimum value for columns with the same scale
- --benchmark-rounding=NUM - sets the precision for all floating-point columns explicitly to that number
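If it were added, registering the option could look roughly like this (purely hypothetical; --benchmark-rounding is not an existing flag):

```python
# sketch of how the proposed option might be registered in the plugin's
# pytest_addoption hook; validating "auto" vs. an integer would happen
# later, when the report is rendered
def pytest_addoption(parser):
    group = parser.getgroup("benchmark")
    group.addoption(
        "--benchmark-rounding",
        action="store",
        default="auto",
        help="'auto' (precision derived from the minimum value of columns "
             "sharing a scale) or an explicit number of decimals.",
    )
```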
The report generator needs some heavy refactoring to support this, and maybe subclassing/customization too.
I wonder if the data generation and reporting logic could (should) be actually split up.
Like, you collect the actual data in one big dict, and then you can choose to report it either to the terminal (default, but might as well be disabled), an image, a text file, an html file -- these are all benchmark data consumers, and there's no logical difference between them.
Kind of like in coverage: you can first collect the coverage data, and then generate whatever reports you want, like term / html / xml etc.
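As a very rough sketch of that split (the class and function names here are made up, not the plugin's actual API): collect the stats into a plain dict first, then feed it to whichever consumers are enabled:

```python
import json


class TerminalReporter:
    """Default consumer: prints a table to the terminal."""

    def report(self, results):
        for name, stats in sorted(results.items()):
            print("{:<30} {:>14,.4f}".format(name, stats["mean"]))


class JSONReporter:
    """Another consumer: dumps the same data to a file."""

    def __init__(self, path):
        self.path = path

    def report(self, results):
        with open(self.path, "w") as fh:
            json.dump(results, fh, indent=2, sort_keys=True)


def run_reporters(results, reporters):
    # the collected data is reporter-agnostic; each consumer decides
    # how to present it (terminal, file, image, html, ...)
    for reporter in reporters:
        reporter.report(results)


results = {
    "test_single": {"mean": 1192.0929},
    "test_rounds_iterations": {"mean": 95.3674},
}
run_reporters(results, [TerminalReporter(), JSONReporter("benchmarks.json")])
```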