
Reduce complexity and cloud storage needs across benchmarking workflows

I spotted some low-hanging fruit here: https://discord.com/channels/689900678990135345/1166024193599615006/1207085419959947294 and here: https://groups.google.com/g/iree-discuss/c/uy0L4Vdl3hs/m/YLe0iLCGAAAJ.

  • The process_benchmark_results step takes around 2 minutes to download the (rather mysterious) iree-oss/benchmark-report Docker image. The generation scripts only need a few Python deps (markdown_strings, requests), so they could just pip install what they need directly (see the first sketch after this list).
  • Benchmark execution jobs spend upwards of 30 seconds checking out runtime submodules. They likely don't need any submodules at all (second sketch below).
  • The compilation_benchmarks job could be folded into build_e2e_test_artifacts. Then we wouldn't need to upload and store large compile-stats/module.vmfb files, or spend 30+ seconds downloading those files for 0-2 seconds of statistics aggregation and uploading. If for whatever reason the build machine doesn't have the right setup for uploading to the dashboard server, we could pass results via workflow artifacts instead (third sketch below). There's no need to send 50-100 GB of (what should be) transient files over the network.
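
For the first bullet, here's a minimal sketch of what a Docker-free step might look like, assuming the report generation really only needs markdown_strings and requests. The step name and script path are hypothetical, not the actual workflow contents:

```yaml
# Hypothetical workflow step: install the two Python deps directly instead of
# pulling the iree-oss/benchmark-report Docker image.
- name: "Processing benchmark results"
  run: |
    python -m pip install markdown_strings requests
    # Placeholder path; the real report-generation script lives in the repo.
    python ./build_tools/benchmarks/generate_report.py
```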
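
For the second bullet, the checkout step in benchmark execution jobs could skip submodules entirely. actions/checkout defaults to submodules: false, so in practice this may just mean dropping an explicit submodules: true; the action version pin below is only an example:

```yaml
- name: "Checking out repository"
  uses: actions/checkout@v4
  with:
    # Benchmark execution jobs likely don't need any runtime submodules.
    submodules: false
```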
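
And for the third bullet, if the build machine can't upload to the dashboard server directly, the small aggregated statistics could move between jobs as workflow artifacts instead of re-downloading the large module.vmfb files. The artifact and file names here are made up for illustration:

```yaml
# In build_e2e_test_artifacts (or wherever the stats are produced):
- name: "Uploading compile stats"
  uses: actions/upload-artifact@v4
  with:
    name: compile-stats
    path: compile_stats.json  # hypothetical aggregated-statistics file

# In the job that talks to the dashboard server:
- name: "Downloading compile stats"
  uses: actions/download-artifact@v4
  with:
    name: compile-stats
```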

ScottTodd · Feb 13 '24 23:02