Benchmark reliability of torchbenchmark
Recently I found that for the same model, the native benchmark code in torchbenchmark does not give the expected timings: one is consistently slower than the other, in some cases by up to 20%. I'm relying on torchao.utils.benchmark_model for now. Please help take a look to see what the problem might be.
For details please see: https://github.com/pytorch/benchmark/pull/2519
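To put a number on the spread, I'm repeating a plain measurement several times and comparing min/max. This is only a minimal sketch of that kind of check; `model` and `example_input` are placeholders standing in for the model from the PR linked above, and the repeat count is arbitrary:

```python
import time
import torch

def naive_time(model, example_input):
    # One end-to-end wall-clock measurement of a forward pass,
    # synchronized so the GPU work is actually included.
    torch.cuda.synchronize()
    start = time.perf_counter()
    with torch.no_grad():
        model(example_input)
    torch.cuda.synchronize()
    return (time.perf_counter() - start) * 1000  # milliseconds

def report_spread(model, example_input, repeats=20):
    times = [naive_time(model, example_input) for _ in range(repeats)]
    print(f"min={min(times):.2f} ms  max={max(times):.2f} ms  "
          f"spread={(max(times) - min(times)) / min(times):.1%}")
```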
This seems like an issue with the model code; our expectation is that repo owners own the model code while our team owns the infrastructure.
I think the time variability from run to run is expected when running on a devgpu. TorchBench servers have some special settings to reduce the variability.
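I'm not sure exactly which settings the TorchBench hosts apply, but the usual knobs for cutting GPU timing noise look something like the sketch below. This is only an illustration, not the actual TorchBench server configuration; it assumes root access on the machine, and the clock value is a placeholder that should come from `nvidia-smi -q -d SUPPORTED_CLOCKS` for your GPU:

```python
import subprocess

def lock_gpu_for_benchmarking(gpu_id=0, sm_clock_mhz=1410):
    # Persistence mode keeps the driver loaded between runs,
    # so initialization cost does not leak into measurements.
    subprocess.run(["nvidia-smi", "-i", str(gpu_id), "-pm", "1"], check=True)
    # Pin the SM clock to a fixed frequency to avoid boost/thermal throttling jitter.
    subprocess.run(
        ["nvidia-smi", "-i", str(gpu_id),
         f"--lock-gpu-clocks={sm_clock_mhz},{sm_clock_mhz}"],
        check=True,
    )

def unlock_gpu(gpu_id=0):
    # Restore default clock behavior after the benchmark run.
    subprocess.run(["nvidia-smi", "-i", str(gpu_id), "--reset-gpu-clocks"], check=True)
```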
Oh so is this more of an infrastructure thing?
I feel this might be related to the benchmarking code, since with the exact same setup, machine, etc., torchao.utils.benchmark_model gives stable results.
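For comparison, the measurement pattern I'd expect a stable harness to follow is roughly the one below. This is a sketch of the general technique (warmup, explicit synchronization, median over many iterations), not the actual torchao.utils.benchmark_model implementation; `model` and `example_input` are again placeholders:

```python
import statistics
import torch

def benchmark_model_stable(model, example_input, warmup=10, iters=100):
    # Warm up so one-time costs (CUDA context, autotuning, caches) are excluded.
    with torch.no_grad():
        for _ in range(warmup):
            model(example_input)
    torch.cuda.synchronize()

    times = []
    with torch.no_grad():
        for _ in range(iters):
            start = torch.cuda.Event(enable_timing=True)
            end = torch.cuda.Event(enable_timing=True)
            start.record()
            model(example_input)
            end.record()
            torch.cuda.synchronize()
            times.append(start.elapsed_time(end))  # milliseconds

    # Median is less sensitive to stray slow iterations than the mean.
    return statistics.median(times)
```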