
TorchBench is a collection of open source benchmarks used to evaluate PyTorch performance.

Results: 314 benchmark issues, sorted by recently updated.

Previously, the ncu option would exit with code 255 for me: `python run_benchmark.py triton --op int4_gemm --metrics ncu_trace`. Adding the python executable path into the command...
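The workaround this issue describes (putting the Python executable path into the profiled command) can be sketched as follows; the `run_benchmark.py` arguments come from the issue text, while the command-building itself is a hypothetical illustration, not TorchBench code:

```python
import shlex
import sys

# Build the command to be profiled under ncu with an explicit
# interpreter path (sys.executable) instead of a bare "python",
# so the profiler resolves the intended Python environment.
cmd = [sys.executable, "run_benchmark.py", "triton",
       "--op", "int4_gemm", "--metrics", "ncu_trace"]
print(shlex.join(cmd))
```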


Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #2265
* __->__ #2264


Summary: X-link: https://github.com/pytorch/pytorch/pull/138164. Capture the timing of the remote fx graph cache get and put operations and add it to the logging. Reviewed By: oulgen. Differential Revision: D64484025


I'm running various benchmarks by calling `pytest test_bench.py testname`, but I was wondering if there is an argument or another way to adjust the batch size or model-specific args when calling the test from...

The call to `install_diffusers()` in [torchbenchmark/canary_models/stable_diffusion_xl/install.py](https://github.com/pytorch/benchmark/blob/main/torchbenchmark/canary_models/stable_diffusion_xl/install.py) should be made before error checking for the token, similar to how it's done for other models such as [stable_diffusion_unet](https://github.com/pytorch/benchmark/blob/main/torchbenchmark/models/stable_diffusion_unet/install.py)...

Summary: Test Plan: run the following shell script:
```shell
repro_arr=("resnet50")
for m in "${repro_arr[@]}"
do
  for i in {1..5}
  do
    python run_benchmark.py torchao --only $m --quantization noquant --performance --inference --bfloat16...
```


Canary models `sage`, `gcn` and `gat` attempt to install the `pyg_lib` module during their installation process but their requirements files do not include the link for the custom wheel which...
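A plausible fix, sketched here without verifying it against the repo, is to add pyg's wheel index as a find-links line in each model's requirements file; `pyg_lib` custom wheels are published per torch/CUDA version under data.pyg.org. The exact torch and CUDA versions below are placeholders:

```
# hypothetical requirements.txt fragment; torch/CUDA versions are placeholders
-f https://data.pyg.org/whl/torch-2.1.0+cu121.html
pyg_lib
```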

Models `lit_llama`, `lit_llama_generate` and `lit_llama_lora` are failing with `ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'` during the installation process as they are missing...

Since the [ncu analyzer](https://github.com/pytorch/benchmark/blob/main/torchbenchmark/_components/ncu/analyzer.py) has been integrated, we can now measure the actual memory traffic and arithmetic intensity, and we are able to add more metrics. Not sure what kind of metrics we should...
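As a reference for the metric definition: arithmetic intensity is conventionally computed as FLOPs per byte of memory traffic. A minimal sketch of that ratio (an illustration, not the analyzer's actual code):

```python
# Arithmetic intensity = floating-point operations per byte of
# memory traffic (FLOP/byte), the x-axis of a roofline plot.
def arithmetic_intensity(flops: float, memory_traffic_bytes: float) -> float:
    return flops / memory_traffic_bytes

# e.g. a kernel doing 1e9 FLOPs over 5e8 bytes of traffic
print(arithmetic_intensity(1e9, 5e8))  # → 2.0 FLOP/byte
```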