benchmark
TorchBench is a collection of open source benchmarks used to evaluate PyTorch performance.
We should support different CUDA versions in https://github.com/pytorch/benchmark/blob/main/.github/scripts/run-config.py. The config (https://github.com/pytorch/benchmark/blob/main/configs/detectron2_speedup/fx2trt-speedup-fp32.yaml) should support another dimension: `cuda_version`.
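A minimal sketch of how a driver could expand such a config once it carries a `cuda_version` axis; the config keys, default values, and helper name below are assumptions for illustration, not the actual run-config.py schema.

```python
# Hypothetical sketch: expanding a benchmark config that has an extra
# `cuda_version` dimension. Keys and defaults are assumed, not taken
# from run-config.py.
import itertools
import yaml  # pyyaml

def expand_config(path):
    with open(path) as f:
        cfg = yaml.safe_load(f)
    # Treat each listed CUDA version as one more axis of the sweep.
    cuda_versions = cfg.get("cuda_version", ["11.3"])   # assumed key/default
    models = cfg.get("models", [])
    precisions = cfg.get("precision", ["fp32"])
    for cuda, model, prec in itertools.product(cuda_versions, models, precisions):
        yield {"cuda_version": cuda, "model": model, "precision": prec}

if __name__ == "__main__":
    for job in expand_config("configs/detectron2_speedup/fx2trt-speedup-fp32.yaml"):
        print(job)
```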
- MNIST
- MNIST_HogWild
- WLM transformers
- WLM LSTM

Also, we need to run release testing twice per month to keep track of potential regressions.
Bumps [numpy](https://github.com/numpy/numpy) from 1.21.2 to 1.22.0. Release notes (sourced from numpy's releases, v1.22.0): NumPy 1.22.0 is a big release featuring the work of 153 contributors spread...
We should add batch-per-second metrics to both run.py and run_sweep.py.
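A rough sketch of how a batches-per-second number could be derived from the wall-clock timing the drivers already collect; the warmup count, loop structure, and function names are illustrative, not the actual run.py code.

```python
# Illustrative sketch: deriving a batches-per-second metric from a timed run.
import time

def measure_batches_per_second(run_one_batch, num_batches=100, warmup=10):
    for _ in range(warmup):          # warm up caches / JIT before timing
        run_one_batch()
    start = time.perf_counter()
    for _ in range(num_batches):
        run_one_batch()
    elapsed = time.perf_counter() - start
    return num_batches / elapsed     # batches per second
```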
Not sure if this feature should go here or elsewhere, but here it is: As a library implementor, one question I have is, "what is the set of operations I...
Repo URL: https://github.com/lucidrains/DALLE2-pytorch
SOTA for text-to-image tasks.
We added the `gen_inputs()` interface and we would like to increase its coverage towards 100%. `gen_inputs(num_batches) -> Tuple[Generator, Optional[int]]` returns a generator and an optional int value. The int value...
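A hedged sketch of one possible `gen_inputs()` implementation for a model that consumes random image batches. Only the signature comes from the issue; the input shapes are made up, and since the meaning of the optional int is truncated above, it is simply returned as `None` here.

```python
# Sketch of a gen_inputs() implementation matching the signature
# gen_inputs(num_batches) -> Tuple[Generator, Optional[int]].
# Input shapes and the Model class are hypothetical.
from typing import Generator, Optional, Tuple
import torch

class Model:
    def __init__(self, batch_size: int = 32, device: str = "cpu"):
        self.batch_size = batch_size
        self.device = device

    def gen_inputs(self, num_batches: int = 1) -> Tuple[Generator, Optional[int]]:
        def _gen():
            for _ in range(num_batches):
                # One example-input tuple per iteration, matching forward(*inputs).
                yield (torch.randn(self.batch_size, 3, 224, 224, device=self.device),)
        return _gen(), None
```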
Since the core model set only measures the iteration time, it assumes the data has been copied from DRAM to GPU memory. However, some of our models still break this...
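One way to satisfy that assumption is to stage the example inputs on the GPU before the timed region starts, so the measured iteration time excludes host-to-device copies. A minimal sketch, with the helper name and usage assumed rather than taken from the benchmark harness:

```python
# Sketch: move inputs to GPU memory ahead of the timed loop so the
# measured iteration time excludes host-to-device transfers.
import torch

def stage_on_gpu(example_inputs, device="cuda"):
    # Copy every tensor in the example-input tuple to GPU memory up front.
    return tuple(
        t.to(device, non_blocking=True) if isinstance(t, torch.Tensor) else t
        for t in example_inputs
    )

# inputs = stage_on_gpu(model.example_inputs)  # done once, outside the timer
# torch.cuda.synchronize()                     # ensure the copies finished
# ... timed loop then runs model(*inputs) ...
```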
Adding a very simple sanity check to make sure other changes won't break `bench_lazy.py`.
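A minimal sketch of what such a smoke test could look like; the script path, flag, and test name are assumptions, not the actual check added in the repo.

```python
# Hypothetical smoke test: confirm bench_lazy.py can be invoked without crashing.
import subprocess
import sys

def test_bench_lazy_smoke():
    result = subprocess.run(
        [sys.executable, "bench_lazy.py", "--help"],  # cheapest possible invocation
        capture_output=True,
        text=True,
    )
    assert result.returncode == 0, result.stderr
```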