Reduce complexity and cloud storage needs across benchmarking workflows
I spotted some low-hanging fruit here: https://discord.com/channels/689900678990135345/1166024193599615006/1207085419959947294 and here: https://groups.google.com/g/iree-discuss/c/uy0L4Vdl3hs/m/YLe0iLCGAAAJ.
- The `process_benchmark_results` step takes around 2 minutes to download a mysterious `iree-oss/benchmark-report` Dockerfile. The generation scripts only need a few Python deps (`markdown_strings`, `requests`), so they could just pip install what they need directly (first sketch below).
- Benchmark execution jobs are spending upwards of 30 seconds checking out runtime submodules. They likely don't need any submodules at all (second sketch below).
- The `compilation_benchmarks` job could be folded into `build_e2e_test_artifacts`. Then we wouldn't need to upload and store large `compile-stats/module.vmfb` files or spend 30+ seconds downloading those files for 0-2 seconds of statistics aggregation and uploading. If there are issues (for whatever reason) with the build machine not having the right setup for uploading to the dashboard server, we could pass results via workflow artifacts instead (third sketch below). No need to send 50-100 GB of (what should be) transient files over the network.
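Here's a minimal sketch of what the slimmed-down results step could look like, assuming the generation scripts run fine on the runner's stock Python; the job layout and script path here are placeholders for illustration, not the actual workflow definitions:

```yaml
# Hypothetical replacement for the process_benchmark_results step: install
# the two Python deps directly instead of pulling the
# iree-oss/benchmark-report Docker image.
process_benchmark_results:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Install report generation deps
      run: pip install markdown_strings requests
    - name: Generate benchmark report
      # Placeholder invocation; the real script name differs.
      run: python ./build_tools/benchmarks/generate_report.py
```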
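For the submodule checkout, assuming the execution jobs use `actions/checkout`: `submodules: false` is already the action's default, so this likely amounts to removing the existing opt-in (or any explicit `git submodule update --init` step) rather than adding anything:

```yaml
# Benchmark execution jobs: skip the ~30s runtime submodule checkout.
# submodules: false is the actions/checkout default; spelled out here
# only to document the intent.
- uses: actions/checkout@v4
  with:
    submodules: false
```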
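And if the folded-in job can't talk to the dashboard server directly, a sketch of the workflow-artifact handoff, moving only the small aggregated statistics between jobs (artifact and file names are made up for illustration):

```yaml
# In build_e2e_test_artifacts, after aggregating compilation statistics:
- name: Upload compilation statistics
  uses: actions/upload-artifact@v4
  with:
    name: compilation-stats
    path: compilation-stats.json  # small aggregate, not the module.vmfb files

# In whichever job uploads to the dashboard server:
- name: Download compilation statistics
  uses: actions/download-artifact@v4
  with:
    name: compilation-stats
```

Artifacts like this expire on the normal retention schedule, which suits transient files better than long-lived cloud storage.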