
Benchmark reliability of torchbenchmarks

Open jerryzh168 opened this issue 1 year ago • 4 comments

Recently I found that for the same model, the native benchmark code in torchbenchmarks does not give the expected timings: one run is consistently slower than another, sometimes by up to 20%. I'm relying on torchao.utils.benchmark_model for now. Please help take a look to see what the problem might be.

For details please see: https://github.com/pytorch/benchmark/pull/2519

jerryzh168 avatar Oct 28 '24 22:10 jerryzh168

This seems like an issue with the model code. Our expectation is that repo owners own the model code while our team owns the infrastructure.

seemethere avatar Oct 31 '24 21:10 seemethere

I think the time variability from run to run is expected when running on a devgpu. TorchBench servers have some special settings to reduce the variability.

kit1980 avatar Oct 31 '24 21:10 kit1980

> I think the time variability from run to run is expected when running on a devgpu. TorchBench servers have some special settings to reduce the variability.

Oh so is this more of an infrastructure thing?

seemethere avatar Oct 31 '24 22:10 seemethere

I feel this might be related to the benchmarking code, since with the exact same setup and machine, torchao.utils.benchmark_model gives stable results.

jerryzh168 avatar Oct 31 '24 22:10 jerryzh168
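For context on why two harnesses can disagree on the same machine: a common source of run-to-run variability is measuring a single cold invocation instead of discarding warmup iterations and aggregating with a robust statistic. Below is a minimal, hedged sketch of that pattern in pure Python (the `benchmark` helper and its parameters are hypothetical, not part of torchbench or torchao); for CUDA workloads one would additionally need to synchronize the device (e.g. `torch.cuda.synchronize()`) around each timed region, which is omitted here.

```python
import statistics
import time


def benchmark(fn, warmup=5, iters=20):
    """Time fn() with warmup runs discarded; return the median seconds.

    Warmup absorbs one-time costs (caches, allocators, JIT), and the
    median is less sensitive to stragglers than a single timed run --
    two reasons naive harnesses can disagree by 10-20% on one model.
    """
    for _ in range(warmup):  # discarded: warm caches, allocator, JIT
        fn()
    times = []
    for _ in range(iters):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return statistics.median(times)


# Usage: compare two workloads under identical measurement conditions.
small = benchmark(lambda: sum(range(10_000)))
large = benchmark(lambda: sum(range(500_000)))
assert large > small  # the heavier workload should measure slower
```

This does not address machine-level noise (clock boosting, thermal throttling), which is presumably what the "special settings" on TorchBench servers mitigate, but it is usually the first thing to rule out when two benchmark harnesses disagree.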