
TorchBench is a collection of open source benchmarks used to evaluate PyTorch performance.

Results: 259 benchmark issues

One idea: after the training optimization, run an inference test to verify that the model produces the same results before and after the optimization.
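A minimal sketch of such a before/after correctness check, using a toy stand-in for a model forward pass (the function and parameter names here are hypothetical, not part of TorchBench; a real check would run the model's inference path and compare tensors with a tolerance):

```python
import math

def model_infer(x, weight):
    # Toy stand-in for a model's inference step.
    return weight * x + 1.0

def same_results(weight_before, weight_after, inputs, rel_tol=1e-5):
    """Compare inference outputs before and after a training optimization.

    Returns True when every output matches within the relative tolerance.
    """
    return all(
        math.isclose(
            model_infer(x, weight_before),
            model_infer(x, weight_after),
            rel_tol=rel_tol,
        )
        for x in inputs
    )

print(same_results(2.0, 2.0, [0.5, 1.0, 3.0]))  # → True
```

With real models the comparison would use an element-wise tolerance check over output tensors rather than scalar `math.isclose`, but the structure of the test is the same.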

We should allow users to specify `base_args`, which sets the baseline arguments, and `args` for runs whose results are comparable to the baseline.
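One hypothetical shape such a configuration could take (this is only a sketch of the proposal, not an existing TorchBench format; the argument values are illustrative):

```python
# Hypothetical benchmark config: one baseline, several comparable runs.
config = {
    # Arguments defining the baseline run.
    "base_args": ["--precision", "fp32"],
    # Argument sets for runs whose results are compared to the baseline.
    "args": [
        ["--precision", "fp16"],
        ["--precision", "bf16"],
    ],
}

# Each entry in "args" yields one run; every run is measured against
# the single run defined by "base_args".
for extra in config["args"]:
    run_args = config["base_args"][:2] and extra  # placeholder combination rule
```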

https://github.com/huggingface/diffusers

1. Output detailed debugging information if the test fails.
2. Write a test to guarantee that `run.py` and `run_sweep.py` generate the same result.

In this issue, we maintain a list of heavyweight models for which we disable CPU tests, because they are too slow (>60s): # Core Models - [ ] [fambench_xlmr](https://github.com/pytorch/benchmark/tree/main/torchbenchmark/models/fambench_xlmr) - [x] [timm_efficientdet](https://github.com/pytorch/benchmark/tree/main/torchbenchmark/models/timm_efficientdet),...

I've noticed that over the past few months the user experience of using TorchBench for correctness testing of [TorchDynamo](https://github.com/facebookresearch/torchdynamo) has gotten much worse. TorchBench used to run quickly, and I could...

_This PR does not represent the final form of the TorchBench code changes. I think of it rather as a discussion of how we should implement a sync-free CUDA event timing mechanism._ This...
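To illustrate the idea under discussion, here is a sketch of CUDA-event-based timing that synchronizes the host only once, after all work has been enqueued (the harness function `time_fn_cuda` is hypothetical; `torch.cuda.Event` with `enable_timing=True` and `elapsed_time` are real PyTorch APIs). It falls back gracefully when PyTorch or CUDA is unavailable:

```python
# Sketch: record start/end CUDA events per iteration without syncing the
# host, then perform a single torch.cuda.synchronize() at the end before
# reading the elapsed times.
try:
    import torch
    _HAS_CUDA = torch.cuda.is_available()
except ImportError:  # keep the sketch importable without PyTorch
    torch = None
    _HAS_CUDA = False

def time_fn_cuda(fn, iters=10):
    """Time `fn` with CUDA events; returns a list of milliseconds, or None
    when CUDA is unavailable."""
    if not _HAS_CUDA:
        return None
    starts = [torch.cuda.Event(enable_timing=True) for _ in range(iters)]
    ends = [torch.cuda.Event(enable_timing=True) for _ in range(iters)]
    for start, end in zip(starts, ends):
        start.record()   # enqueued on the stream, no host-side sync
        fn()
        end.record()
    torch.cuda.synchronize()  # the only host sync, after all work is queued
    return [s.elapsed_time(e) for s, e in zip(starts, ends)]
```

Keeping the synchronization out of the measured region avoids perturbing the GPU pipeline between iterations, which is the motivation for a sync-free timing mechanism.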
