
Add support for thresholds file

Open WilliamTambellini opened this issue 8 years ago • 3 comments

Good morning all. I've read as much of the documentation and the list of GitHub issues as I could, but could not find an answer to this question. If I understand correctly, this benchmark library offers a nice way to register and benchmark different functions. But I did not find a way for the library to:

  • compare the timings (real, cpu, ...) against given threshold values
  • report which benchmarks passed and which failed when the time is higher than the threshold

I have tested the following workaround, but there should be a better way:

  • run the benchmark and output to JSON
  • use, for example, the jq command line tool to test that each benched function is within the desired threshold (benchresults.json contains the JSON example as given in the main .md doc): jq -e '.benchmarks[] | select(.name == "BM_SetInsert/1024/1").real_time < 30000' ~/tmp/benchresults.json (a Python equivalent is sketched below)
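For illustration, the same check without jq, as a minimal Python sketch. It only assumes the JSON layout shown in the docs for --benchmark_format=json (a top-level "benchmarks" array with "name" and "real_time" fields); the file name and the 30000 limit are simply the values from the jq example above:

```python
import json
import sys

# Illustrative only: "benchresults.json" and the 30000 threshold are the values
# from the jq example above; real_time is assumed to be in nanoseconds, as in
# the documented JSON sample output.
with open("benchresults.json") as f:
    results = json.load(f)

for bench in results["benchmarks"]:
    if bench["name"] == "BM_SetInsert/1024/1" and bench["real_time"] >= 30000:
        print("FAIL: %s real_time %.0f >= 30000" % (bench["name"], bench["real_time"]))
        sys.exit(1)

print("PASS")
```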

This works, but I am wondering whether these checks would be better done inside the library. For instance, the user could provide a "threshold" file via the command line (--threshold_file or whatever) giving the thresholds (time limits) for each benched function, for example in CSV format:

    "BM_SetInsert/1024/1", 30000
    "BM_SetInsert/1024/8", 33000
    "BM_SetInsert/1024/10", 32000
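Something along these lines, as a rough sketch only (the CSV layout and the nanosecond unit are my assumptions from the example above; nothing like this exists in the library today):

```python
import csv
import json
import sys

def check_thresholds(results_json, threshold_csv):
    """Return a list of failure messages for benchmarks whose real_time exceeds its threshold."""
    with open(results_json) as f:
        benchmarks = {b["name"]: b for b in json.load(f)["benchmarks"]}

    failures = []
    with open(threshold_csv) as f:
        for name, limit in csv.reader(f, skipinitialspace=True):
            bench = benchmarks.get(name)
            if bench is None:
                failures.append("%s: no such benchmark in results" % name)
            elif bench["real_time"] > float(limit):
                failures.append("%s: real_time %.0f > threshold %s"
                                % (name, bench["real_time"], limit))
    return failures

if __name__ == "__main__":
    # usage: python check_thresholds.py benchresults.json thresholds.csv
    failures = check_thresholds(sys.argv[1], sys.argv[2])
    for msg in failures:
        print("FAIL " + msg)
    sys.exit(1 if failures else 0)
```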

RFC time. Cheers W.

WilliamTambellini avatar Feb 02 '17 23:02 WilliamTambellini

You're right, that's not part of the library. Perhaps it could be a supplemental tool, much like the 'compare_bench.py' tool in tools that compares two sets of benchmarks.

The library is really focused on the benchmark registration and running part. Any extra things like this, or like continuous benchmarking services, would be better served as separate tools, I think. However, having them in the repository is welcome!


dmah42 avatar Feb 02 '17 23:02 dmah42

Hi Dominic, thanks. Indeed, I was thinking about enhancing compare_bench.py, but then remembered the drawback of dealing with the Python 2 vs Python 3 split, and the dependency on Python in general. Anyway, fair enough. So would you prefer me to:

1. enhance compare_bench.py to accept a new CLI option ("--check_threshold" or similar) that does the threshold comparison and returns 0, or 1 if any timing in input1 is higher than the corresponding timing in input2 (per bench, of course), or
2. create a new Python script, for instance "check_benchs.py" or whatever, that would basically do something very similar to compare_bench.py, but a little more?
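To make option 1 a bit more concrete, here is a rough sketch of the comparison itself, not the existing compare_bench.py behaviour; "--check_threshold" is just the hypothetical option name from above, and both inputs are assumed to be --benchmark_format=json output files:

```python
import json
import sys

def load_times(path):
    """Map benchmark name -> real_time from a JSON benchmark output file."""
    with open(path) as f:
        return {b["name"]: b["real_time"] for b in json.load(f)["benchmarks"]}

def check_threshold(input1, input2):
    """Return 1 if any timing in input1 exceeds the matching timing in input2, else 0."""
    measured = load_times(input1)
    limits = load_times(input2)
    status = 0
    for name, limit in limits.items():
        if name in measured and measured[name] > limit:
            print("FAIL %s: %.0f > %.0f" % (name, measured[name], limit))
            status = 1
    return status

if __name__ == "__main__":
    # usage: python check_benchs.py input1.json input2.json
    sys.exit(check_threshold(sys.argv[1], sys.argv[2]))
```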

WilliamTambellini avatar Feb 03 '17 00:02 WilliamTambellini

compare_bench.py is a pretty small script. I would add another, different script, but try to use and expand the existing Python library.

EricWF avatar Feb 03 '17 01:02 EricWF