
Add timm and huggingface model suites support

xuzhao9 opened this pull request · 5 comments

Dynamobench supports extra huggingface and timm models beyond the existing model set in TorchBench. This PR adds support for those models as well; they can be invoked with run.py or through the group_bench userbenchmark.

Test plan:

TIMM model example:

$ python run.py convit_base -d cpu -t eval
Running eval method from convit_base on cpu in eager mode with input batch size 64 and precision fp32.
CPU Wall Time per batch: 4419.601 milliseconds
CPU Wall Time:       4419.601 milliseconds
Time to first batch:         2034.6840 ms
CPU Peak Memory:                0.6162 GB
$ python run.py convit_base -d cpu -t train
Running train method from convit_base on cpu in eager mode with input batch size 64 and precision fp32.
CPU Wall Time per batch: 17044.825 milliseconds
CPU Wall Time:       17044.825 milliseconds
Time to first batch:         1616.9790 ms
CPU Peak Memory:                7.3408 GB

Huggingface model example:

$ python run.py MBartForCausalLM -d cuda -t train
Running train method from MBartForCausalLM on cuda in eager mode with input batch size 4 and precision fp32.
GPU Time per batch:  839.994 milliseconds
CPU Wall Time per batch: 842.323 milliseconds
CPU Wall Time:       842.323 milliseconds
Time to first batch:         5390.2949 ms
GPU 0 Peak Memory:             19.7418 GB
CPU Peak Memory:                0.9121 GB
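
The extended models should also be reachable through the group_bench userbenchmark mentioned above. A minimal invocation sketch (not part of the original test plan; the run_benchmark.py entry point, the --config flag, and the config path are assumptions based on how TorchBench userbenchmarks are typically run):

# hypothetical config path; a sketch of its contents appears further down in this thread
$ python run_benchmark.py group_bench --config userbenchmark/group_bench/configs/extended_models_example.yaml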

Fixes https://github.com/pytorch/benchmark/issues/2170

xuzhao9 · Mar 14 '24

Can we add something interesting here to stress-test these issues: https://github.com/pytorch/pytorch/issues/121072 https://github.com/pytorch/pytorch/pull/121324#issuecomment-1998934523

/cc @ezyang @albanD @vfdev-5 @nWEIdia @xuzhao9 @eqy @ptrblck

bhack · Mar 15 '24

@xuzhao9 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

facebook-github-bot · Mar 15 '24

Looks good. If I put in extended models, does it still run the normal TorchBench models or not? If not, it would be nice to be able to run each group individually.

HDCharles · Mar 19 '24

@HDCharles Yes, it will run the models specified in both the models: and extended_models: sections. However, they currently run as individual models and there is no "grouping" of results; we could add that in a follow-up PR.

xuzhao9 · Mar 19 '24
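
For illustration, a hypothetical group_bench config combining both sections just described. Only the models: and extended_models: section names come from this thread; the file location, the test/device keys, and the particular model picks are assumptions:

$ cat > userbenchmark/group_bench/configs/extended_models_example.yaml <<'EOF'
# hypothetical config; key names other than models:/extended_models: are guesses
test: eval
device: cuda
models:
  - resnet50            # existing TorchBench model (illustrative pick)
extended_models:
  - convit_base         # timm model from the test plan above
  - MBartForCausalLM    # huggingface model from the test plan above
EOF
$ python run_benchmark.py group_bench --config userbenchmark/group_bench/configs/extended_models_example.yaml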

@xuzhao9 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

facebook-github-bot · Mar 19 '24

@xuzhao9 merged this pull request in pytorch/benchmark@2196021e9bc0b72a547121bbf298ae854a85a21a.

facebook-github-bot · Mar 20 '24