cbor2
Add a benchmark to compare cbor2 vs stdlib json
This produces something like:

```
---------------------------------------------- benchmark '100 arrays dict: deserialize': 2 tests -----------------------------------------------
Name (time in ms)                       Min               Max              Mean           StdDev            Median             IQR  Outliers(*)  Rounds  Iterations
------------------------------------------------------------------------------------------------------------------------------------------------
test_loads[cbor2-100 arrays dict]  126.2084 (1.0)   135.6891 (1.0)   127.7965 (1.0)   3.2050 (1.0)   126.7133 (1.0)   0.6334 (1.0)          1;1       8           1
test_loads[json-100 arrays dict]   455.5744 (3.61)  486.9637 (3.59)  475.8448 (3.72)  11.8648 (3.70)  478.8540 (3.78)  8.7094 (13.75)       1;1       5           1
------------------------------------------------------------------------------------------------------------------------------------------------
```
for several different datasets. It could be used to reason about possible Cython speedups...
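As a rough illustration of what such a comparison measures, here is a minimal standalone sketch using only `timeit` and the stdlib. The dataset shape and helper names are illustrative assumptions, not the PR's actual benchmark code, and `cbor2` is treated as an optional import:

```python
# Standalone benchmark sketch (NOT the PR's code): compare deserialization
# speed of stdlib json vs cbor2 on a dict of small integer arrays.
import json
import timeit


def make_dataset(n=100):
    # Illustrative "100 arrays dict": a dict whose values are lists of ints.
    return {str(i): list(range(50)) for i in range(n)}


def bench(name, loads, payload, rounds=5):
    # Report the best of several single-shot runs to reduce noise.
    best = min(timeit.repeat(lambda: loads(payload), number=1, repeat=rounds))
    print(f"{name:6s} min {best * 1000:9.4f} ms")
    return best


data = make_dataset()
json_payload = json.dumps(data)
json_time = bench("json", json.loads, json_payload)

try:
    import cbor2  # optional dependency; skip the comparison if missing
except ImportError:
    print("cbor2 not installed; skipping cbor2 timing")
else:
    cbor_payload = cbor2.dumps(data)
    bench("cbor2", cbor2.loads, cbor_payload)
```

Absolute numbers from a script like this vary by machine and Python build; only the relative ratio between the two decoders is meaningful.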
Coverage decreased (-5.01%) to 94.195% when pulling dee0f4317c2aba1ea21154cfad2d437f98b964d7 on lelit:benchmark into 548a21c387b7575fdc76d4e4f71dfc9a6bb327e0 on agronholm:master.
Coverage remained the same at 99.208% when pulling dee0f4317c2aba1ea21154cfad2d437f98b964d7 on lelit:benchmark into 548a21c387b7575fdc76d4e4f71dfc9a6bb327e0 on agronholm:master.
Having a benchmark would be good, but I don't think it should be run as part of the test suite. I've been using this to benchmark cbor2 so far. But if you could make this into a separate script, that would be perfect.
I will give that a look. Anyway, why do you consider it inappropriate for the benchmark to be part of the test suite? It could be marked specially and run only when desired, if needed. It can dump a JSON result to be post-processed in a custom way (I did that to produce a reST table to be included in the documentation, for example).
The purpose of the test suite is to ensure correctness of operation. I don't see how a benchmark is relevant there.
On 2 May 2017 at 01:36, "Alex Grönholm" [email protected] wrote:
The purpose of the test suite is to ensure correctness of operation. I don't see how a benchmark is relevant there.
Ok, we shall agree to disagree then: IMO there is room to consider speed regressions/improvements as part of that "purpose", at least when speed matters. I can't waste time rewriting already existing functionality, sorry.
The results of benchmarking are not accurate enough to fail the test suite when the numbers are unexpected. Such benchmarks should be run by hand. I'm not entirely against using pytest for this purpose, as long as the benchmark is disabled by default and isolated in its own module.
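One common way to keep a pytest-based benchmark disabled by default is a `conftest.py` gate that skips marked tests unless an opt-in flag is passed. The `--run-benchmarks` option and `benchmark` marker below are illustrative choices, not the project's actual configuration:

```python
# conftest.py sketch (hypothetical names): skip tests marked "benchmark"
# unless pytest is invoked with --run-benchmarks.
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--run-benchmarks",
        action="store_true",
        default=False,
        help="run benchmark tests",
    )


def pytest_configure(config):
    # Register the marker so pytest doesn't warn about an unknown mark.
    config.addinivalue_line("markers", "benchmark: mark test as a benchmark")


def pytest_collection_modifyitems(config, items):
    if config.getoption("--run-benchmarks"):
        return  # flag given: let the benchmarks run
    skip = pytest.mark.skip(reason="need --run-benchmarks option to run")
    for item in items:
        if "benchmark" in item.keywords:
            item.add_marker(skip)
```

With this in place, a plain `pytest` run exercises only the correctness tests, while `pytest --run-benchmarks` includes the benchmark module as well.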
Of course, to be merged, the tests must first pass.
I moved the benchmarks to a subfolder and made the dependencies optional.