CM error: no scripts were found with above tags and variations, follow the new docs site
(python3-venv) aarch64_sh ~> cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1 --model=dlrm_v2-99 --implementation=reference --framework=pytorch --category=datacenter --scenario=Offline --execution_mode=test --device=cpu --quiet --test_query_count=50
INFO:root:* cm run script "run-mlperf inference _find-performance _full _r4.1"
INFO:root:  * cm run script "detect os"
INFO:root:    ! cd /home/ubuntu
INFO:root:    ! call /home/ubuntu/CM/repos/mlcommons@cm4mlops/script/detect-os/run.sh from tmp-run.sh
INFO:root:    ! call "postprocess" from /home/ubuntu/CM/repos/mlcommons@cm4mlops/script/detect-os/customize.py
INFO:root:  * cm run script "detect cpu"
INFO:root:    * cm run script "detect os"
INFO:root:      ! cd /home/ubuntu
INFO:root:      ! call /home/ubuntu/CM/repos/mlcommons@cm4mlops/script/detect-os/run.sh from tmp-run.sh
INFO:root:      ! call "postprocess" from /home/ubuntu/CM/repos/mlcommons@cm4mlops/script/detect-os/customize.py
INFO:root:    ! cd /home/ubuntu
INFO:root:    ! call /home/ubuntu/CM/repos/mlcommons@cm4mlops/script/detect-cpu/run.sh from tmp-run.sh
INFO:root:    ! call "postprocess" from /home/ubuntu/CM/repos/mlcommons@cm4mlops/script/detect-cpu/customize.py
INFO:root:  * cm run script "get python3"
INFO:root:    ! load /home/ubuntu/CM/repos/local/cache/a30274b4c59046f8/cm-cached-state.json
INFO:root:Path to Python: /home/ubuntu/CM/repos/local/cache/8ff2b68847874923/mlperf/bin/python3
INFO:root:Python version: 3.10.12
INFO:root:  * cm run script "get mlcommons inference src"
INFO:root:    ! load /home/ubuntu/CM/repos/local/cache/181aac323a064657/cm-cached-state.json
INFO:root:  * cm run script "get sut description"
INFO:root:    * cm run script "detect os"
INFO:root:      ! cd /home/ubuntu
INFO:root:      ! call /home/ubuntu/CM/repos/mlcommons@cm4mlops/script/detect-os/run.sh from tmp-run.sh
INFO:root:      ! call "postprocess" from /home/ubuntu/CM/repos/mlcommons@cm4mlops/script/detect-os/customize.py
INFO:root:    * cm run script "detect cpu"
INFO:root:      * cm run script "detect os"
INFO:root:        ! cd /home/ubuntu
INFO:root:        ! call /home/ubuntu/CM/repos/mlcommons@cm4mlops/script/detect-os/run.sh from tmp-run.sh
INFO:root:        ! call "postprocess" from /home/ubuntu/CM/repos/mlcommons@cm4mlops/script/detect-os/customize.py
INFO:root:      ! cd /home/ubuntu
INFO:root:      ! call /home/ubuntu/CM/repos/mlcommons@cm4mlops/script/detect-cpu/run.sh from tmp-run.sh
INFO:root:      ! call "postprocess" from /home/ubuntu/CM/repos/mlcommons@cm4mlops/script/detect-cpu/customize.py
INFO:root:    * cm run script "get python3"
INFO:root:      ! load /home/ubuntu/CM/repos/local/cache/a30274b4c59046f8/cm-cached-state.json
INFO:root:Path to Python: /home/ubuntu/CM/repos/local/cache/8ff2b68847874923/mlperf/bin/python3
INFO:root:Python version: 3.10.12
INFO:root:    * cm run script "get compiler"
INFO:root:      ! load /home/ubuntu/CM/repos/local/cache/ad4709d27e2746f6/cm-cached-state.json
INFO:root:    * cm run script "get generic-python-lib _package.dmiparser"
INFO:root:      ! load /home/ubuntu/CM/repos/local/cache/487bb3df259949b6/cm-cached-state.json
INFO:root:    * cm run script "get cache dir _name.mlperf-inference-sut-descriptions"
INFO:root:      ! load /home/ubuntu/CM/repos/local/cache/a1971a4c4e324cc2/cm-cached-state.json
Generating SUT description file for cfe40b4a2122-pytorch
INFO:root:    ! call "postprocess" from /home/ubuntu/CM/repos/mlcommons@cm4mlops/script/get-mlperf-inference-sut-description/customize.py
INFO:root:  * cm run script "get mlperf inference results dir"
INFO:root:    ! load /home/ubuntu/CM/repos/local/cache/5a5d8a736e15489b/cm-cached-state.json
INFO:root:  * cm run script "install pip-package for-cmind-python _package.tabulate"
INFO:root:    ! load /home/ubuntu/CM/repos/local/cache/ffdaabd53c414be8/cm-cached-state.json
INFO:root:  * cm run script "get mlperf inference utils"
INFO:root:    * cm run script "get mlperf inference src"
INFO:root:      ! load /home/ubuntu/CM/repos/local/cache/181aac323a064657/cm-cached-state.json
INFO:root:    ! call "postprocess" from /home/ubuntu/CM/repos/mlcommons@cm4mlops/script/get-mlperf-inference-utils/customize.py
Using MLCommons Inference source from /home/ubuntu/CM/repos/local/cache/0ab0359edada429b/inference
Running loadgen scenario: Offline and mode: performance
INFO:root:* cm run script "app mlperf inference generic _reference _dlrm_v2-99 _pytorch _cpu _test _r4.1_default _offline"
CM error: no scripts were found with above tags and variations
variation tags ['reference', 'dlrm_v2-99', 'pytorch', 'cpu', 'test', 'r4.1_default', 'offline'] are not matching for the found script app-mlperf-inference with variations dict_keys(['cpp', 'mil', 'mlcommons-cpp', 'ctuning-cpp-tflite', 'tflite-cpp', 'reference', 'python', 'nvidia', 'mlcommons-python', 'reference,gptj_', 'reference,sdxl_', 'reference,dlrm-v2_', 'reference,llama2-70b_', 'reference,mixtral-8x7b', 'reference,resnet50', 'reference,retinanet', 'reference,bert_', 'nvidia-original,r4.1-dev_default', 'nvidia-original,r4.1-dev_default,gptj_', 'nvidia-original,r4.1_default', 'nvidia-original,r4.1_default,gptj_', 'nvidia-original,r4.1-dev_default,llama2-70b_', 'nvidia-original,r4.1_default,llama2-70b_', 'nvidia-original', 'intel', 'intel-original', 'intel-original,gptj_', 'redhat', 'qualcomm', 'kilt', 'kilt,qaic,resnet50', 'kilt,qaic,retinanet', 'kilt,qaic,bert-99', 'kilt,qaic,bert-99.9', 'intel-original,resnet50', 'intel-original,retinanet', 'intel-original,bert-99', 'intel-original,bert-99.9', 'intel-original,gptj-99', 'intel-original,gptj-99.9', 'resnet50', 'retinanet', '3d-unet-99', '3d-unet-99.9', '3d-unet_', 'sdxl', 'llama2-70b_', 'llama2-70b-99', 'llama2-70b-99.9', 'mixtral-8x7b', 'rnnt', 'rnnt,reference', 'gptj-99', 'gptj-99.9', 'gptj', 'gptj_', 'bert_', 'bert-99', 'bert-99.9', 'dlrm_', 'dlrm-v2-99', 'dlrm-v2-99.9', 'dlrm_,nvidia', 'mobilenet', 'efficientnet', 'onnxruntime', 'tensorrt', 'tf', 'pytorch', 'openshift', 'ncnn', 'deepsparse', 'tflite', 'glow', 'tvm-onnx', 'tvm-pytorch', 'tvm-tflite', 'ray', 'cpu', 'cuda,reference', 'cuda', 'rocm', 'qaic', 'tpu', 'fast', 'test', 'valid,retinanet', 'valid', 'quantized', 'fp32', 'float32', 'float16', 'bfloat16', 'int4', 'int8', 'uint8', 'offline', 'multistream', 'singlestream', 'server', 'power', 'batch_size.#', 'r2.1_default', 'r3.0_default', 'r3.1_default', 'r4.0-dev_default', 'r4.0_default', 'r4.1-dev_default', 'r4.1_default']) !
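Reading the error closely: the variation tag CM tried to resolve is dlrm_v2-99 (underscore between "dlrm" and "v2", taken from --model=dlrm_v2-99), but the variation list of app-mlperf-inference printed above only contains the hyphenated forms dlrm-v2-99 and dlrm-v2-99.9 (plus the dlrm_ prefix group). Assuming that naming mismatch is the whole problem, rerunning the same command with the hyphenated model name should let CM match the script; this is a suggested retry based on the error output, not a confirmed fix:

(python3-venv) aarch64_sh ~> cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1 \
    --model=dlrm-v2-99 --implementation=reference --framework=pytorch \
    --category=datacenter --scenario=Offline --execution_mode=test \
    --device=cpu --quiet --test_query_count=50

Every flag other than --model is unchanged from the failing invocation above; the new docs site referenced in the title should confirm the current model-name spelling for r4.1.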