benchmark
TorchBench is a collection of open source benchmarks used to evaluate PyTorch performance.
There are small nuances in how the dynamo runners benchmark models that can make certain torchbench models fail. Some models might be explicitly skipped; others might fail because of some dtype...
TorchBench CI has detected a performance signal or runtime regression.
Base PyTorch commit: 806d1a871ddfd2d38e1791489892009feaec8425
Affected PyTorch commit: 3316374d14f2e488b65dfb4d0a2560d8414fa19f
Affected Tests:
- timm_vision_transformer_large, Adadelta, cuda, (pt2) no_foreach: +482341.57958%
- timm_vision_transformer_large, RAdam,...
TorchBench CI has detected a performance signal or runtime regression.
Base PyTorch commit: 0200b1106c4fe80ea0884181dc8d649ef6078ea3
Affected PyTorch commit: 806d1a871ddfd2d38e1791489892009feaec8425
Affected Tests:
- resnet50, ASGD, cuda, default: +124.02993%
- resnet50, ASGD, cuda,...
With a couple of exceptions, we should check that gradient calculation is disabled by default in eval tests.
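A minimal sketch of the kind of check this suggests, assuming a simple hypothetical eval harness (`run_eval` and its model/input are illustrative, not TorchBench's actual runner): wrap the eval forward pass in `torch.no_grad()` so gradient calculation is disabled by default.

```python
import torch

def run_eval(model, example_input):
    """Hypothetical eval helper: run inference with gradients disabled."""
    model.eval()
    with torch.no_grad():  # autograd is off inside this context
        assert not torch.is_grad_enabled()
        return model(example_input)

out = run_eval(torch.nn.Linear(4, 2), torch.randn(1, 4))
# Tensors produced under no_grad do not require grad
assert not out.requires_grad
```

Checking `out.requires_grad` (or `torch.is_grad_enabled()` inside the forward pass) is one way an eval test could verify the property.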
Related post: https://discuss.pytorch.org/t/struggling-to-get-pytorch-fast-enough-to-use-in-public-competition/186015
Example links:
- FastMRI: https://github.com/mlcommons/algorithmic-efficiency/tree/main/algorithmic_efficiency/workloads/fastmri
- Criteo DLRMsmall: https://github.com/mlcommons/algorithmic-efficiency/tree/main/algorithmic_efficiency/workloads/criteo1tb
- WMT Transformer: https://github.com/mlcommons/algorithmic-efficiency/tree/main/algorithmic_efficiency/workloads/wmt
Updated docs.
Add HF model gpt-j-6b into torchbench. Work for roadmap #1293

```shell
$ python run.py hf_GPTJ --torchdynamo inductor
Running eval method from hf_GPTJ on cpu in dynamo inductor mode with input...
```
This is just a tracking issue to make sure we don't forget. cc @msaroufim