SciMLBenchmarks.jl
[WIP] perf: common neural network workloads benchmarks
Currently adds CPU versions. Should be easy to extend to GPU once SciMLBenchmarks has GPU runners available.
TODOs
- [ ] Restore Manifest to use the released versions of packages
- [ ] Boilerplate
  - [x] Inference
  - [x] Inference Plots
  - [ ] Training
  - [ ] Training Plots
- [ ] Models
  - [ ] Simple MLP
    - [x] Inference
    - [ ] Training
  - [ ] Simple MLP with BN -- wait on https://github.com/LuxDL/LuxLib.jl/pull/126
    - [ ] Inference
    - [ ] Training
  - [ ] LeNet
    - [x] Inference
    - [ ] Training
  - [ ] ResNet
    - [ ] Inference
    - [ ] Training
- [ ] Simple MLP
  - [ ] Add Reactant Inference Code
    - Training code will have to wait till the `compile` PR lands in Lux
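For reference, the "Simple MLP inference" workload in the checklist above is essentially a timed forward pass of a small fully connected network. A minimal NumPy sketch of that workload (layer sizes, batch size, and iteration count are illustrative assumptions, not the benchmark's actual configuration):

```python
import time
import numpy as np

def init_params(sizes, rng):
    """Create (weight, bias) pairs for each consecutive layer pair."""
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(x, params):
    """Forward pass: tanh hidden layers, linear output layer."""
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

rng = np.random.default_rng(0)
params = init_params([784, 128, 128, 10], rng)  # MNIST-like sizes (assumed)
x = rng.standard_normal((32, 784))              # batch of 32 inputs

start = time.perf_counter()
for _ in range(100):
    y = mlp_forward(x, params)
elapsed = time.perf_counter() - start
```

The actual benchmarks time the Lux/SimpleChains implementations of this forward pass; this sketch only shows the shape of the computation being measured.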
@thazhemadam @staticfloat what's the easiest way to set this up so that benchmarks can choose a separate runner for GPUs?
Allowing different runners based on the jmd metadata would be a nice way. Something like
```yaml
---
title: Simple Neural Networks
author: Avik Pal
backend: CUDA # if absent, default to CPU; in the future we can allow AMDGPU/Metal/etc. with the same syntax
---
```
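To make the proposal concrete, runner selection could be a small front-matter lookup. A hedged sketch (the `select_runner` helper, the `backend:` key semantics, and the runner labels are all assumptions for illustration, not existing SciMLBenchmarks API):

```python
def select_runner(jmd_text: str) -> str:
    """Pick a runner label from an optional `backend:` key in jmd front matter.

    Hypothetical helper: key names and runner labels are assumptions.
    """
    lines = jmd_text.splitlines()
    if not lines or lines[0].strip() != "---":
        return "cpu"  # no front matter -> default to the CPU runner
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of front-matter block
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.split("#")[0].strip()  # drop trailing comments
    backend = meta.get("backend", "").upper()
    # Map known GPU backends to a GPU runner; everything else stays on CPU.
    return "gpu-cuda" if backend == "CUDA" else "cpu"
```

With the example header above, `select_runner` would return `"gpu-cuda"`, and a file with no `backend:` key would fall back to the CPU runner.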
Lux now matches SimpleChains in inference timings :sweat_smile:. The cases where we fall behind are because Octavian is somewhat slow on EPYC machines, so it is turned off.