benchmark
TorchBench is a collection of open source benchmarks used to evaluate PyTorch performance.
Hello, may I ask which tasks are used for end-to-end testing before a new version of PyTorch is released? Will the tests focus on the consistency of metrics...
Since one operator can have multiple inputs, averaged results can be misleading: the optimized operator may be faster than the baseline on average while some quantiles are slower,...
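To illustrate the point, here is a small sketch with hypothetical per-input latencies (plain Python, not TorchBench code): the mean speedup looks like a win even though the worst input regresses.

```python
from statistics import mean

# Hypothetical per-input latencies (ms) for a baseline and an optimized operator.
baseline  = [1.0, 1.0, 1.0, 1.0, 10.0]
optimized = [0.9, 0.9, 0.9, 0.9, 12.0]  # faster on four inputs, slower on one

# Per-input speedups: >1 means the optimized kernel wins on that input.
speedups = [b / o for b, o in zip(baseline, optimized)]

print(f"mean speedup: {mean(speedups):.2f}")       # > 1: looks like a win on average
print(f"worst-case speedup: {min(speedups):.2f}")  # < 1: a regression hides in the tail
```

This is why reporting per-quantile (or worst-case) speedups alongside the mean gives a more honest picture.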
Summary: an extra `m` was added by mistake in one of the configs and it is breaking in OSS. Reviewed By: plotfi Differential Revision: D64208508
Maybe https://github.com/suo/lintrunner is a good choice.
Works toward the roadmap (https://github.com/pytorch/benchmark/issues/1293) to increase benchmark coverage. This model implementation is hard-coded to CUDA because of a third-party repo dependency, which prevents it from running on custom devices except...
Import optional Triton kernels from FlagGems: https://github.com/FlagOpen/FlagGems. Supports the softmax and addmm operators. Test plan: ``` $ python run_benchmark.py triton --op addmm --only flaggems,triton_addmm --num-inputs 2 --metrics latency,gbps,tflops (M, N, K) flaggems-gbps...
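As background on the metrics named in the test plan, a sketch of how tflops and gbps are commonly derived for an (M, N, K) addmm. The helper names and the fp16/one-pass traffic assumptions are mine, not TorchBench's:

```python
def addmm_tflops(m: int, n: int, k: int, latency_ms: float) -> float:
    """Approximate TFLOPS: 2*M*N*K flops for the matmul plus M*N for the bias add."""
    flops = 2.0 * m * n * k + m * n
    return flops / (latency_ms * 1e-3) / 1e12

def addmm_gbps(m: int, n: int, k: int, latency_ms: float, bytes_per_elem: int = 2) -> float:
    """Approximate memory traffic in GB/s, assuming fp16 (2 bytes/element) and that
    A (M*K), B (K*N), the bias, and the output (2*M*N total) are each touched once."""
    num_bytes = bytes_per_elem * (m * k + k * n + 2 * m * n)
    return num_bytes / (latency_ms * 1e-3) / 1e9

# Example: a 4096x4096x4096 addmm finishing in 1 ms
print(f"{addmm_tflops(4096, 4096, 4096, 1.0):.1f} TFLOPS")
print(f"{addmm_gbps(4096, 4096, 4096, 1.0):.1f} GB/s")
```

Achieved-bandwidth and achieved-throughput numbers like these are what let the harness compare kernels against hardware roofline limits rather than raw latency alone.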
For PyTorch releases we execute the following benchmarks: https://github.com/pytorch/benchmark/tree/main/userbenchmark/release-test These are the tests that we run: ``` # run mnist mkdir -p "${RESULT_DIR}/mnist" pushd "${EXAMPLES_DIR}/mnist" export LOG_FILE=${RESULT_DIR}/mnist/result.log export MEM_FILE=${RESULT_DIR}/mnist/result_mem.log ${PREFIX} bash...
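A minimal, self-contained sketch of the per-benchmark logging convention the snippet above follows: one result subdirectory per benchmark, with a latency log and a memory log. The temp directory and the placeholder command are mine; the real release-test script sets `${RESULT_DIR}` itself and runs the example's actual training entry point.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Illustrative stand-in for ${RESULT_DIR}; the release-test harness provides its own.
RESULT_DIR="$(mktemp -d)"

# One subdirectory and one pair of log files per benchmark, as in the release script.
mkdir -p "${RESULT_DIR}/mnist"
export LOG_FILE="${RESULT_DIR}/mnist/result.log"
export MEM_FILE="${RESULT_DIR}/mnist/result_mem.log"

# Placeholder for the actual training command run inside the example directory.
echo "train loss: 0.01" | tee "${LOG_FILE}"
echo "max_rss_kb: 123456" > "${MEM_FILE}"
```

Keeping each benchmark's logs in its own subdirectory lets the harness collect and diff results per model after the full suite finishes.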