
Python Comparison Scripts

alliepiper opened this issue 3 years ago · 9 comments

NVBench has a work-in-progress JSON output format and I'm working on a very basic python script to compare two JSON files.

We should grow this functionality into a more complete set of analysis tools. At minimum, this should cover the features provided by Google Benchmark's excellent comparison scripts.

If anyone is interested in writing some python to help with this, let me know. I'll update this issue once I have finalized the JSON output format.

Basic Regression Testing

  • P0: Compare two json files: compare.py baseline.json test.json
  • P0: Specify a custom error threshold: compare.py --gpu-threshold 5 baseline.json test.json (likewise --cpu-threshold and --batch-threshold)
  • P2: Run a benchmark executable and compare with a JSON file: compare.py baseline.json --run test.exe -b 3 -a T=[I32,U64] -a Elements[pow2]=30

These should:

  • Compare the benchmarks with the same name + config.
  • Print abs/rel changes for cpu/gpu/batch measurements.
  • Highlight any entries that exceed the time threshold.
  • Return an error code if any exceed thresholds.
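The core of the P0 comparison could be sketched as below. This is a minimal illustration, not the actual compare.py: JSON parsing is omitted, and the (benchmark name, config) -> time mapping is an assumed layout rather than the real NVBench JSON schema, which was still in flux at the time.

```python
def compare_measurements(baseline, test, threshold_pct=5.0):
    """Compare timings from two runs, matched by (benchmark name, config).

    `baseline` and `test` map (benchmark, config) -> time in seconds, as
    might be extracted from two NVBench JSON files. Returns the comparison
    rows (key, old, new, abs diff, rel diff %) and whether any relative
    slowdown exceeds the threshold.
    """
    rows = []
    failed = False
    # Only configs present in both files are comparable.
    for key in sorted(baseline.keys() & test.keys()):
        old, new = baseline[key], test[key]
        abs_diff = new - old
        rel_pct = 100.0 * abs_diff / old if old else float("inf")
        failed = failed or rel_pct > threshold_pct
        rows.append((key, old, new, abs_diff, rel_pct))
    return rows, failed

# A 15% slowdown on one config trips the default 5% threshold:
baseline = {("copy", "T=I32"): 1.00e-3, ("copy", "T=U64"): 2.00e-3}
test_run = {("copy", "T=I32"): 1.02e-3, ("copy", "T=U64"): 2.30e-3}
rows, failed = compare_measurements(baseline, test_run)
```

A driver script would print the rows and exit nonzero when `failed` is set, satisfying the error-code requirement above.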

Analysis modes

Compare benchmarks with different names. This answers questions like:

  • How much faster is benchmark X for input type T vs. type U across a range of input sizes?
  • Does algorithm X take more time to run than algorithm Y for the same inputs?

These will need some way of specifying the sets of configurations to compare. Google Benchmark has worked out a general syntax for specifying this; we should adapt their approach to use the NVBench axis syntax.
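Selecting configuration sets might look something like the sketch below. The spec grammar here is hypothetical, loosely modeled on NVBench's -a Name=[v1,v2] axis syntax; a real implementation would need to follow that grammar exactly.

```python
def parse_axis_spec(spec):
    """Parse an NVBench-style axis spec such as 'T=[I32,U64]' into
    (axis name, set of allowed values). Hypothetical helper; the real
    NVBench axis syntax has more forms (e.g. Elements[pow2]=30).
    """
    name, _, values = spec.partition("=")
    return name, {v.strip() for v in values.strip("[]").split(",")}

def config_matches(config, specs):
    """True if a config (axis name -> value) satisfies every spec."""
    return all(config.get(name) in allowed for name, allowed in specs)
```

With this, a comparison tool could filter each file's results down to the configs named on the command line before pairing them up.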

Output

Ideally markdown formatted, similar to NVBench's default output.
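Emitting markdown is simple enough to sketch directly; this is an illustration of the desired output style, not NVBench's actual table renderer.

```python
def markdown_table(headers, rows):
    """Render comparison rows as a GitHub-flavored markdown table,
    similar in spirit to NVBench's default console output."""
    lines = ["| " + " | ".join(headers) + " |",
             "| " + " | ".join("---" for _ in headers) + " |"]
    lines += ["| " + " | ".join(str(c) for c in r) + " |" for r in rows]
    return "\n".join(lines)

print(markdown_table(["Benchmark", "Old", "New", "Diff%"],
                     [["copy<I32>", "1.00ms", "1.02ms", "+2.0%"]]))
```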


alliepiper, Mar 29 '21

I'd be interested to help out here!

shwina, Apr 28 '21

I can help as well if you need extra hands.

vyasr, Apr 28 '21

Initial work is in NVIDIA/thrust#14.

alliepiper, May 04 '21

@allisonvacanti what's next here? NVIDIA/thrust#14 helped close the gap, but I don't recall exactly how far it got us or what we still need to do. RAPIDS is making a push to formalize and analyze our benchmarks more, so migrating fully to nvbench is likely to become a priority in the near future. I'm happy to help make sure that we have sufficient feature parity with Google Benchmark. CC @shwina in case you want to continue being involved too.

vyasr, Jan 05 '22

Basic thresholding and comparison of multiple files were added in NVIDIA/thrust#48.

robertmaynard, Jan 05 '22

There's a lot we could still do, such as filtering the results by benchmark name/index and axis values. But I don't think these are essential right now.

Are there any "must have" features for RAPIDS that we're missing?

BTW, I'm working on a branch that makes some changes to the JSON file layout to make things more consistent. I hope to have that merged by the end of the week, time permitting 🤞

alliepiper, Jan 10 '22

I assume the changes you're referring to are NVIDIA/thrust#70? It looks great! 🎉

@robertmaynard @jrhemstad @harrism any thoughts on what we would need to see in nvbench to make the transition from gbench smooth for RAPIDS?

vyasr, Jan 12 '22

The gbench compare script had the ability to run a Mann-Whitney U test between two samples to determine whether there was a statistically significant difference between the populations. Do we have anything like that in nvbench yet?

It's very helpful when looking at small differences in performance to establish if the difference is just "noise" or actually meaningful.
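For reference, the U statistic behind that test can be sketched in a few lines. This is a minimal O(n*m) illustration only; a real script would use scipy.stats.mannwhitneyu, which also computes the p-value needed to decide significance.

```python
def mann_whitney_u(xs, ys):
    """Mann-Whitney U statistic for two timing samples.

    U counts, over all cross-sample pairs, how often an x exceeds a y
    (ties count half). U near 0 or near len(xs)*len(ys) means the samples
    barely overlap (a likely real difference); U near the midpoint means
    the difference is plausibly noise.
    """
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u
```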

jrhemstad, Jan 12 '22

@vyasr Yep! That PR has all of my pending changes to the JSON/Python stuff.

@jrhemstad We don't have anything like that at the moment.


One feature that I'd like to see at some point is the ability to compare performance between different benchmarks that use the same axes.

For example, see NVIDIA/cccl#720, which points out that thrust::all_of is slower than thrust::count_if. It'd be nice to be able to write some automated tests that check the performance of equivalent algorithms and identify these sorts of issues.
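Pairing up two differently named benchmarks that share axes could be sketched as below. The flat (benchmark name, config) -> time layout is an assumption for illustration, not the actual NVBench JSON schema, and the timings are made up.

```python
def pair_benchmarks(results, name_a, name_b):
    """Pair configs of two differently named benchmarks that share axes.

    `results` maps (benchmark name, config) -> time. Returns
    config -> (time_a, time_b) for configs measured by both benchmarks.
    """
    by_config = {}
    for (name, config), time in results.items():
        if name in (name_a, name_b):
            by_config.setdefault(config, {})[name] = time
    return {cfg: (t[name_a], t[name_b])
            for cfg, t in by_config.items()
            if name_a in t and name_b in t}

# Hypothetical timings for the all_of vs. count_if case above:
results = {
    ("thrust::all_of",   "Elements=2^28"): 3.1e-3,
    ("thrust::count_if", "Elements=2^28"): 1.9e-3,
    ("thrust::count_if", "Elements=2^20"): 1.0e-5,
}
paired = pair_benchmarks(results, "thrust::all_of", "thrust::count_if")
```

An automated check could then flag any config where the "should be equivalent" benchmark is more than some factor slower than its counterpart.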

alliepiper, Jan 12 '22