benchmark
TorchBench is a collection of open source benchmarks used to evaluate PyTorch performance.
We would like to introduce basic distributed benchmarking support on synthetic data. The idea is to wrap the single-GPU models in DDP/FSDP, then get them running on conda_mast. The initial...
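The wrapping step above can be sketched as follows. This is a minimal, hypothetical example, not the actual TorchBench harness: it uses a tiny `Linear` stand-in model, a one-process "gloo" group on CPU, and synthetic input, just to show the DDP wrap-and-train pattern.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# One-process process group so the sketch runs on a single CPU host.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

model = torch.nn.Linear(8, 4)      # stand-in for a single-GPU TorchBench model
ddp_model = DDP(model)             # no device_ids needed on CPU

x = torch.randn(2, 8)              # synthetic data batch
loss = ddp_model(x).sum()
loss.backward()                    # gradients are all-reduced across ranks

dist.destroy_process_group()
```

For FSDP the wrap call changes (`torch.distributed.fsdp.FullyShardedDataParallel`) but the overall shape of the loop stays the same.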
Currently we are using our own code for accuracy checking: https://github.com/pytorch/benchmark/blob/main/torchbenchmark/util/env_check.py#L426 We should avoid duplicating this code and instead call the accuracy-checking code from dynamobench directly. The accuracy...
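The kind of check being duplicated looks roughly like the sketch below. The function name `same` and its signature are assumptions modeled on the shared dynamobench-style utility, not the exact API:

```python
import torch

def same(ref, res, rtol=1e-4, atol=1e-4):
    """Hypothetical sketch: report whether eager (ref) and optimized (res)
    outputs agree within tolerance, recursing into lists/tuples."""
    if isinstance(ref, (list, tuple)):
        return len(ref) == len(res) and all(
            same(a, b, rtol, atol) for a, b in zip(ref, res)
        )
    return torch.allclose(ref, res, rtol=rtol, atol=atol)
```

Keeping one implementation means tolerance policy fixes land in one place instead of two.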
Hi @xuzhao9, during the investigation of the LLAMA_7b OOM issue, we found that there are many redundant memory allocations; maybe they are not necessary for the test. 1. There is a deepcopy for...
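The deepcopy redundancy can be seen with a small sketch (a toy `Linear` model standing in for LLAMA_7b): the copy owns its own parameter storage, so the harness ends up holding two full sets of weights.

```python
import copy
import torch

model = torch.nn.Linear(1024, 1024)   # stand-in for the benchmark model
clone = copy.deepcopy(model)          # duplicates every parameter's storage

def param_bytes(m):
    # total bytes held by a module's parameters
    return sum(p.numel() * p.element_size() for p in m.parameters())

orig_bytes = param_bytes(model)
clone_bytes = param_bytes(clone)
```

For a 7B-parameter model in fp16 this duplication alone costs on the order of 14 GB, which is why skipping it for tests that don't need an eager reference copy helps with OOMs.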
Hi, I duplicated the llama model and renamed it llama_7b, changing the model parameters according to the llama_7b specification; it looks like this:  I skipped CPU eager mode and only ran...
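A parameter set along those lines might look like the sketch below. The values follow the published LLaMA-7B specification, but the exact field names expected by torchbench's llama model config are assumptions here:

```python
# Hypothetical overrides for a llama_7b variant of the existing llama model.
llama_7b_args = {
    "dim": 4096,          # hidden size
    "n_layers": 32,       # transformer blocks
    "n_heads": 32,        # attention heads
    "vocab_size": 32000,  # LLaMA tokenizer vocabulary
}
```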
Hi, there is a Dockerfile to build the docker image, but no script to run the benchmark tests inside the docker image automatically. I tried many ways to run test_bench.py...
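A wrapper along the lines requested might look like this. The image tag, Dockerfile path, and test selector are assumptions for illustration, not values documented by the repo, and the commands need a Docker host with GPU support:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Build the benchmark image from the repo's Dockerfile (path is an assumption).
docker build -t torchbench:local -f docker/Dockerfile .

# Run a test_bench.py selection inside the container with GPUs exposed.
docker run --rm --gpus all torchbench:local \
    python test_bench.py -k "cuda"
```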
PyTorch can be easily installed with AMD ROCm, but torchbench has some limitations in ROCm environments:
- [x] model installation
- [x] model execution
- [ ] GPU memory measurement
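For the unchecked item, one approach is a guarded sketch like the one below: on ROCm builds PyTorch routes the `torch.cuda` memory APIs through HIP, so the same calls can be attempted, with a fallback when no device is available. This is a hypothetical helper, not torchbench's actual measurement code:

```python
import torch

def peak_gpu_memory_mb():
    """Peak device memory in MB, or None when no GPU backend is available.
    On ROCm builds, torch.cuda.* is backed by HIP, so the same call applies."""
    if torch.cuda.is_available():
        return torch.cuda.max_memory_allocated() / (1024 ** 2)
    return None  # measurement unsupported in this environment
```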
See: https://github.com/pytorch/pytorch/issues/113063 Cross-referencing that issue here, but let's consolidate the conversation on the other issue.
(python310) C:\Users\prs\work_projects\pytorch_benchmark>python install.py models phi_1_5
checking packages torch, torchvision, torchaudio are installed...OK
running setup for C:\Users\prs\work_projects\pytorch_benchmark\torchbenchmark\models\phi_1_5...
Traceback (most recent call last):
  File "C:\Users\prs\work_projects\pytorch_benchmark\install.py", line 60, in
    success &= setup(models=args.models, verbose=args.verbose, continue_on_fail=args.continue_on_fail,...
In the past 2-3 weeks, these configs have been bouncing up and down:
DALLE2_pytorch, Adam, cuda, amsgrad, maximize
DALLE2_pytorch, Adam, cuda, default
DALLE2_pytorch, Adam, cuda, foreach
DALLE2_pytorch, Adam, cuda, fused,...