Improvements to pytest `--benchmark` option

Several features/improvements to SHARK's pytest `--benchmark` option are tracked in this issue:
- [x] Improve "frontend" / MLIR dialect argument transmission through `SharkBenchmarkRunner`.
- [x] Verify benchmark results for PyTorch+CUDA on vision models.
- [x] Benchmarks in CI should upload bench_results.csv to `gs://iree-shared-files/nod-perf/bench_results/{Y-M-D}/bench_results_{cpu/gpu}_{github-SHA}.csv` (#241). See the upload sketch after this list.
- [x] Update README with benchmarking instructions. (#239)
- [ ] Enable pytest `--benchmark` for TensorFlow shark tank module tests. See the conftest sketch after this list.
- [x] Add options to setup_venv.sh for ONNX benchmarking requirements.
- [ ] Benchmarks should be able to produce ONNX results and provide better data in generated results. (see: nod-ai/transformer-benchmarks)
- [x] Make benchmark results more accessible -- upload to `gs://shark-public/builder/...`
- [ ] Thread counts (see the results-row sketch after this list)
- [ ] Save compile-time flags
- [ ] Useful logs, traces, etc.
- [x] Metadata
- [x] Comparison %'s
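
For the CI upload item above, a minimal sketch of building the dated GCS path and uploading the CSV. It assumes the `google-cloud-storage` package and configured credentials; `GITHUB_SHA` is the commit SHA exported by GitHub Actions, and the function name and defaults are illustrative rather than SHARK's actual CI code.

```python
# Minimal sketch, not SHARK's actual CI step. Assumes google-cloud-storage is
# installed and GCS credentials are available in the environment.
import datetime
import os

from google.cloud import storage


def upload_bench_results(local_csv: str = "bench_results.csv",
                         device: str = "cpu") -> str:
    """Upload a results CSV to the dated bucket path from the task above."""
    date = datetime.date.today().isoformat()     # {Y-M-D}, e.g. "2022-07-15"
    sha = os.environ.get("GITHUB_SHA", "local")  # commit SHA set by GitHub Actions
    blob_path = (f"nod-perf/bench_results/{date}/"
                 f"bench_results_{device}_{sha}.csv")
    bucket = storage.Client().bucket("iree-shared-files")
    bucket.blob(blob_path).upload_from_filename(local_csv)
    return f"gs://iree-shared-files/{blob_path}"
```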
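For the TensorFlow `--benchmark` item, this is how such a flag is typically wired in a `conftest.py`. The `pytest_addoption` hook and `getoption` call are standard pytest APIs; the fixture name `benchmark_enabled` is an assumption, not SHARK's existing code.

```python
# conftest.py -- a minimal sketch of wiring a --benchmark flag with standard
# pytest hooks; fixture name and default are assumptions, not SHARK's code.
import pytest


def pytest_addoption(parser):
    # Register the flag so `pytest --benchmark` is accepted on the CLI.
    parser.addoption(
        "--benchmark",
        action="store_true",
        default=False,
        help="Run benchmarks alongside correctness tests.",
    )


@pytest.fixture
def benchmark_enabled(request):
    # Tests (e.g. the TensorFlow shark tank modules) read this fixture to
    # decide whether to collect timings.
    return request.config.getoption("--benchmark")
```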
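For the open "better data" items (thread counts, compile-time flags, metadata, comparison %'s), a hedged sketch of what an enriched results row could look like; all column names, the baseline comparison, and the example IREE flag are illustrative assumptions, not SHARK's actual schema.

```python
# Illustrative only: column names, the sample numbers, and the example
# compile flag are assumptions, not SHARK's actual results schema.
import csv
import os


def results_row(model, latency_ms, baseline_ms, compile_flags):
    # Comparison %: positive means faster than the baseline run.
    speedup_pct = 100.0 * (baseline_ms - latency_ms) / baseline_ms
    return {
        "model": model,
        "latency_ms": latency_ms,
        "vs_baseline_%": round(speedup_pct, 1),
        "threads": os.cpu_count(),                 # host CPU count as a thread-count proxy
        "compile_flags": " ".join(compile_flags),  # save compile-time flags with the row
    }


with open("bench_results.csv", "w", newline="") as f:
    row = results_row("resnet50", 7.9, 9.2, ["--iree-hal-target-backends=llvm-cpu"])
    writer = csv.DictWriter(f, fieldnames=row.keys())
    writer.writeheader()
    writer.writerow(row)
```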