
TorchBench is a collection of open source benchmarks used to evaluate PyTorch performance.

Results: 259 benchmark issues

Stack from [ghstack](https://github.com/ezyang/ghstack): * __->__ #870 * #801 Currently failing in NNC (https://github.com/pytorch/pytorch/issues/75925). Once it is fixed in NNC, we can re-enable it in the microbenchmark suite


Moved from original issue https://github.com/pytorch/hub/issues/148 on behalf of @zdevito: I am working on Python packaging for PyTorch and just used the benchmark models to verify that the packaging approach...

# MLCube integration

[MLCube](https://github.com/mlcommons/mlcube) is a consistent interface to machine learning models in containers like Docker. In this PR the best-practices working group at [MLCommons](https://mlcommons.org/en/) presents an MLCube integration...


Updates to newer torchtext APIs for pytorch/text#1443


RTX 3070 Ti 8 GB, NVIDIA-SMI 460.91.03, Driver Version: 460.91.03, CUDA Version: 11.2. Running `pytest test_bench.py --ignore_machine_config`. Is 8 GB too small for a hobbyist? Do we need to add the following to the script...

Since we call get_module() in each of the unit tests (train, eval, example, and check_device), the unit tests are skipped when get_module() is not implemented. This is because we catch...
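A minimal sketch of the skipping behavior described above, with a hypothetical `Model` class and `run_test` harness standing in for the benchmark suite's actual test code: catching `NotImplementedError` from `get_module()` is what turns the test into a skip.

```python
class Model:
    """Hypothetical benchmark model whose get_module() is not implemented."""
    def get_module(self):
        raise NotImplementedError("get_module() is not implemented for this model")

def run_test(model, test_fn):
    """Run one unit test (train/eval/example/check_device style).

    If get_module() raises NotImplementedError, the test is skipped
    rather than reported as a failure.
    """
    try:
        module = model.get_module()
    except NotImplementedError:
        return "skipped"
    return test_fn(module)

result = run_test(Model(), lambda module: "passed")
# result == "skipped", because get_module() raised NotImplementedError
```

In the real suite the same effect would typically come from `pytest.skip()` inside the `except` block, so the skip shows up in pytest's report instead of as a return value.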


PROFILE_MODEL: timm_regnet


After PR #526 lands, we need to fix these:
```
# FIXME: Models will use context "with torch.no_grad():", so the lifetime of no_grad will end after the eval().
# FIXME: ...
```
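A small sketch of the lifetime issue the FIXME refers to (the `eval_step` function name is hypothetical, not from the suite): `torch.no_grad()` only suppresses autograd inside its `with` block, so once the eval call returns, gradient tracking resumes.

```python
import torch

def eval_step(model, x):
    # no_grad is active only inside this "with" block; its lifetime
    # ends when eval_step returns, which is the behavior the FIXME notes.
    with torch.no_grad():
        return model(x)

model = torch.nn.Linear(4, 2)
x = torch.randn(1, 4)

y = eval_step(model, x)
# Inside no_grad, the output carries no autograd history...
assert y.requires_grad is False
# ...but calling the model outside the block tracks gradients again.
assert model(x).requires_grad is True
```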

While this is a large PR according to SLOC, most of the changes are mechanical. In order to collect traces, we need to do a bit of standardization: 1) Standardize...


Just a sanity check to confirm whether pretrained quantized models can be benchmarked this way. Thanks!
