
Add a benchmark to compare cbor2 vs stdlib json

Open lelit opened this issue 9 years ago • 9 comments

This produces something like

------------------------------------------------------------------- benchmark '100 arrays dict: deserialize': 2 tests --------------------------------------------------------------------
Name (time in ms)                          Min                 Max                Mean             StdDev              Median               IQR            Outliers(*)  Rounds  Iterations
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
test_loads[cbor2-100 arrays dict]     126.2084 (1.0)      135.6891 (1.0)      127.7965 (1.0)       3.2050 (1.0)      126.7133 (1.0)      0.6334 (1.0)              1;1       8           1
test_loads[json-100 arrays dict]      455.5744 (3.61)     486.9637 (3.59)     475.8448 (3.72)     11.8648 (3.70)     478.8540 (3.78)     8.7094 (13.75)            1;1       5           1
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

for several different datasets. It could be used to reason about possible Cython speedups...

lelit avatar Oct 31 '16 16:10 lelit
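The table above looks like pytest-benchmark output. A minimal standalone sketch of the same kind of comparison, using only the stdlib `timeit` (the `bench` helper and the dataset shape are illustrative assumptions, not the PR's actual benchmark; the cbor2 side is shown commented out since cbor2 is the package under test):

```python
import json
import timeit

def bench(loads, payload, rounds=5):
    """Best wall-clock time in seconds for a single loads(payload) call."""
    return min(timeit.repeat(lambda: loads(payload), number=1, repeat=rounds))

# A "100 arrays dict"-style dataset: a dict mapping 100 keys to integer arrays.
data = {str(i): list(range(100)) for i in range(100)}

print(f"json : {bench(json.loads, json.dumps(data)) * 1000:.3f} ms")
# With cbor2 installed, the other side of the comparison would be:
# import cbor2
# print(f"cbor2: {bench(cbor2.loads, cbor2.dumps(data)) * 1000:.3f} ms")
```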

Coverage Status

Coverage decreased (-5.01%) to 94.195% when pulling dee0f4317c2aba1ea21154cfad2d437f98b964d7 on lelit:benchmark into 548a21c387b7575fdc76d4e4f71dfc9a6bb327e0 on agronholm:master.

coveralls avatar Oct 31 '16 16:10 coveralls


Coverage Status

Coverage remained the same at 99.208% when pulling dee0f4317c2aba1ea21154cfad2d437f98b964d7 on lelit:benchmark into 548a21c387b7575fdc76d4e4f71dfc9a6bb327e0 on agronholm:master.

coveralls avatar Oct 31 '16 16:10 coveralls

Having a benchmark would be good but I don't think it should be run as part of the test suite. I've been using this to benchmark cbor2 so far. But if you could make this into a separate script, that would be perfect.

agronholm avatar Apr 26 '17 22:04 agronholm

I will give that a look. Anyway, why do you consider it inappropriate for the benchmark to be part of the test suite? It could be specially marked and run only when desired, if needed. It can dump a JSON result to be post-processed in a custom way (I did that to produce a reST table for inclusion in the documentation, for example).

lelit avatar May 01 '17 09:05 lelit
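The JSON dump lelit mentions could plausibly be post-processed along these lines (a hypothetical sketch, not the code from the PR; `rest_table` is an invented name, and the JSON layout assumed here is the top-level `benchmarks` list with per-test `stats` that `pytest --benchmark-json=out.json` writes):

```python
import json

def rest_table(benchmark_json: str) -> str:
    """Render a pytest-benchmark JSON dump as a small reST simple table."""
    data = json.loads(benchmark_json)
    # One row per benchmark: (test name, mean time in ms); assumes at least one entry.
    rows = [(b["name"], b["stats"]["mean"] * 1000) for b in data["benchmarks"]]
    name_w = max(len("Name"), *(len(n) for n, _ in rows))
    sep = "=" * name_w + "  " + "=" * 9
    lines = [sep, f"{'Name'.ljust(name_w)}  Mean (ms)", sep]
    lines += [f"{n.ljust(name_w)}  {m:9.3f}" for n, m in rows]
    lines.append(sep)
    return "\n".join(lines)
```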

The purpose of the test suite is to ensure correctness of operation. I don't see how a benchmark is relevant there.

agronholm avatar May 01 '17 23:05 agronholm

On May 2, 2017 at 01:36, "Alex Grönholm" [email protected] wrote:

The purpose of the test suite is to ensure correctness of operation. I don't see how a benchmark is relevant there.

Ok, we shall agree to disagree then: IMO there is room to consider speed regressions/improvements part of that "purpose", at least when speed matters. I can't spend time rewriting already existing functionality, sorry.

lelit avatar May 01 '17 23:05 lelit

The results of benchmarking are not accurate enough to fail the test suite when the numbers are unexpected. Such benchmarks should be run by hand. I'm not entirely against using pytest for this purpose, as long as the benchmark is disabled by default and isolated in its own module.

agronholm avatar May 01 '17 23:05 agronholm
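One common way to get the "disabled by default" behaviour agronholm asks for is a `conftest.py` hook pair like the following (a hypothetical sketch; the `--run-benchmarks` option and the `benchmark` marker name are assumptions, not what the PR actually implements):

```python
# conftest.py sketch: skip anything marked "benchmark" unless explicitly requested.
import pytest

def pytest_addoption(parser):
    parser.addoption("--run-benchmarks", action="store_true", default=False,
                     help="also run tests marked as benchmarks")

def pytest_collection_modifyitems(config, items):
    if config.getoption("--run-benchmarks"):
        return  # benchmarks were requested; leave the collected items alone
    skip = pytest.mark.skip(reason="needs --run-benchmarks to run")
    for item in items:
        if "benchmark" in item.keywords:
            item.add_marker(skip)
```

With this in place, a plain `pytest` run skips the benchmark module, while `pytest --run-benchmarks` executes it.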

Of course, to be merged, the tests must first pass.

agronholm avatar May 01 '17 23:05 agronholm

I moved the benchmarks to a subfolder and made the dependencies optional.

Sekenre avatar Jun 02 '23 13:06 Sekenre