
Reference implementations of MLPerf™ inference benchmarks

331 inference issues, sorted by recently updated:

During the submission process, the summary CSV generated by the https://github.com/mlcommons/inference/blob/master/tools/submission/generate_final_report.py script reports `Nodes` and `a#`, where `Nodes` comes from the `number_of_nodes` field (https://github.com/mlcommons/inference/blob/master/tools/submission/generate_final_report.py#L35) and `a#` comes from the...
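
A minimal sketch (the CSV filename and the exact column names are assumptions here, not something the script guarantees) for inspecting what lands in those two columns of the generated summary table:

```python
import pandas as pd

# Hypothetical path to the summary table produced during submission checking;
# adjust to the actual file generated in your run.
df = pd.read_csv("summary_results.csv")

# 'Nodes' is expected to mirror system_desc's 'number_of_nodes';
# 'a#' is the accelerator-count column.
for col in ("Nodes", "a#"):
    if col in df.columns:
        print(col, "->", df[col].value_counts(dropna=False).to_dict())
    else:
        print(f"column {col!r} not present in this CSV")
```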

Command:

```
cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1 \
   --model=gptj-99 \
   --implementation=reference \
   --framework=pytorch \
   --category=edge \
   --scenario=Offline \
   --execution_mode=test \
   --device=cpu \
   --docker --quiet \
   --test_query_count=50
```

Error:

```
Encoding Samples...
```

Running `cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1 --model=llama2-70b-99 --implementation=reference --framework=pytorch --category=datacenter --scenario=Offline --execution_mode=test --device=cpu --docker --quiet --test_query_count=50` results in several hours of silence, after which this error is produced:

```
git clone...
```

```
(python3-venv) aarch64_sh ~> cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1 --model=dlrm_v2-99 --implementation=reference --framework=pytorch --category=datacenter --scenario=Offline --execution_mode=test --device=cpu --quiet --test_query_count=50
INFO:root:* cm run script "run-mlperf inference _find-performance _full _r4.1"
INFO:root:  * cm run script...
```

I followed the [document](https://docs.mlcommons.org/inference/benchmarks/image_classification/resnet50) to run inference on ResNet50, using MLCommons-Python -> edge -> Tensorflow -> CUDA -> Native. The command is

```bash
cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1 \
   --model=resnet50 \
   --implementation=reference \
   ...
```

```
AssertionError: Some of the target inference cases were found: {'case_00111', 'case_00400', 'case_00185', 'case_00052', 'case_00065', 'case_00000', 'case_00084', 'case_00076', 'case_00157', 'case_00044', 'case_00005', 'case_00034', 'case_00056', 'case_00171', 'case_00041', 'case_00049', 'case_00078', 'case_00207', 'case_00112', 'case_00169', 'case_00189', ...
```
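
For narrowing this down, a minimal sketch (the paths and the expected-case list file are assumptions, not the repository's actual layout) that compares the case IDs the checker expects against the case directories actually present on disk:

```python
import json
from pathlib import Path

# Assumed locations; substitute the paths used by your 3D-UNet / KiTS19 setup.
data_dir = Path("build/preprocessed_data")
expected_file = Path("meta/inference_cases.json")  # hypothetical list of expected case IDs

expected = set(json.loads(expected_file.read_text()))
present = {p.name for p in data_dir.iterdir() if p.name.startswith("case_")}

print("expected but missing on disk:", sorted(expected - present))
print("present but not expected:", sorted(present - expected))
```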

Hello, when downloading the processed dataset for llama2-70b with rclone, as specified in `language/llama2-70b/README.md` in the "Get Dataset" section, I noticed the file `mlperf_log_accuracy.json` within the folder. Is...

In [mlperf.conf](https://github.com/mlcommons/inference/blob/master/mlperf.conf#L12) we have both `dlrm` and `dlrm-v2`, and this is confusing to submitters as to which one to use. Even though `dlrm-v2` is the expected one, we...
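
A quick way to see which of the two names actually carries settings is to scan the config file; a minimal sketch, assuming a local checkout with `mlperf.conf` at the repository root:

```python
from pathlib import Path

# mlperf.conf entries have the form "<model>.<scenario>.<key> = <value>".
# Print every entry whose model component is dlrm or dlrm-v2.
for line in Path("mlperf.conf").read_text().splitlines():
    stripped = line.strip()
    if stripped.startswith(("dlrm.", "dlrm-v2.")):
        print(stripped)
```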

@pgmpablo157321 One of our SingleStream submission results is showing the wrong value in the final table. It should show the 90th-percentile latency, but it actually shows the 97th-percentile latency. ![image](https://github.com/user-attachments/assets/bcdc1d35-7cfb-46d1-a274-86b65fbcb95c) ![image](https://github.com/user-attachments/assets/350add63-3787-4067-b686-32f2c487cb4d)
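
To double-check which percentile the published number was taken from, the LoadGen summary of the run can be inspected directly; a minimal sketch, assuming the SingleStream run's `mlperf_log_summary.txt` is at hand:

```python
from pathlib import Path

# Path is an assumption; point it at the run whose result looks wrong.
summary = Path("mlperf_log_summary.txt").read_text()

# The summary lists several latency percentiles
# (e.g. "90.00 percentile latency (ns) : ..."); print them all for comparison.
for line in summary.splitlines():
    if "percentile latency" in line:
        print(line.strip())
```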

The questions below are not documented anywhere AFAIK. It would be good to clarify these.

| Benchmark | Responsible Maintainer | Run support duration | Code improvements welcome? |
|---|---|---|---|
...