ampere_model_library

AML's goal is to make benchmarking of various AI architectures on Ampere CPUs a pleasurable experience :)

17 ampere_model_library issues

In some models (I found it in resnet50, vgg16, bert_base, and roberta, but please check others too) the `disable_jit_freeze` parameter isn't correctly passed to the `run_pytorch_fp` function. It makes tests with...
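A minimal sketch of the fix described above: the flag is parsed but never forwarded to the runner. The argument names and the `run_pytorch_fp` signature here are assumptions for illustration, not AML's actual API.

```python
# Hypothetical sketch: thread disable_jit_freeze through to the runner.
# Signatures are assumed, not taken from AML's source.

def run_pytorch_fp(model_name, batch_size, disable_jit_freeze=False):
    """Stand-in runner: returns the configuration it was called with."""
    return {
        "model": model_name,
        "batch_size": batch_size,
        "disable_jit_freeze": disable_jit_freeze,
    }

def main(args):
    # The bug: the flag was parsed but dropped here.
    # The fix is simply forwarding it:
    return run_pytorch_fp(
        args["model"],
        args["batch_size"],
        disable_jit_freeze=args["disable_jit_freeze"],
    )

config = main({"model": "resnet50", "batch_size": 1, "disable_jit_freeze": True})
print(config["disable_jit_freeze"])  # True
```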

https://github.com/mlcommons/inference/tree/master/language/bert has a list of compatible BERT models. Please add the last one (PyTorch int8 BERT, based on the Hugging Face bert-large-uncased model).

enhancement

Elements:
* ref accuracy for every precision / framework
* instructions on downloading models + datasets, help the user get through the process as much as possible
* summary / comparison...

documentation

* can we store models ourselves and share links?
* look for dataset wrappers that take care of downloading / unpacking
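A hedged sketch of what such a dataset wrapper could look like: download an archive on first use, cache it, and unpack it. The class name, cache path, and URL handling are placeholders, not AML's actual implementation.

```python
# Sketch of a download-and-unpack dataset wrapper (assumed design, not AML's).
import os
import tarfile
import urllib.request

class DatasetWrapper:
    def __init__(self, url, cache_dir="~/.cache/aml_datasets"):
        self.url = url
        self.cache_dir = os.path.expanduser(cache_dir)

    def ensure(self):
        """Download the archive unless cached, then unpack it if it is a tarball."""
        os.makedirs(self.cache_dir, exist_ok=True)
        archive = os.path.join(self.cache_dir, os.path.basename(self.url))
        if not os.path.exists(archive):
            urllib.request.urlretrieve(self.url, archive)
        if tarfile.is_tarfile(archive):
            with tarfile.open(archive) as tar:
                tar.extractall(self.cache_dir)
        return self.cache_dir
```

A wrapper like this would let run scripts call `DatasetWrapper(url).ensure()` instead of documenting manual download steps per model.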

enhancement

To run:

```
AIO_NUM_THREADS=128 ALPACA_DATASET_PATH=/ampere/aml/alpaca_data.json python run.py -m h2oai/h2o-danube2-1.8b-chat
```

This is a draft pull request; ignore the run1.py file. **Example command to run on Ampere:**

```
OPENBLAS_NUM_THREADS=10 AIO_IMPLICIT_FP16_TRANSFORM_FILTER=".*" AIO_NUM_THREADS=128 python run_hf.py -m stabilityai/stable-diffusion-xl-base-1.0 -b 1 --steps 25
```

**example command to run...