
Reference implementations of MLPerf™ inference benchmarks

200 inference issues, sorted by recently updated

Hi, I am trying to run BERT INT8 with the TF backend. However, I don't see any TF INT8 model info in the link below: https://github.com/mlcommons/inference/tree/master/language/bert Any help on how to run it will...

This PR enables one to run the BERT reference implementation using the onnxruntime backend with custom model, dataset, and log paths, and also supports the use of Nvidia GPUs with onnxruntime version...

Reflects changes made in #1254

Hi, to get the best performance from the system (x86 arch), I am running two instances of MLPerf simultaneously through a shell script (each with its own LoadGen and SUT); each instance is bound...
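The two-instance setup described in that issue can be sketched in Python. This is only an illustration of the idea, not the submitter's actual script: `run_instance` is a hypothetical stand-in for a real LoadGen + SUT run, and `os.sched_setaffinity` (Linux-only) plays the role the shell script's CPU binding (e.g. `taskset` or `numactl`) would play:

```python
import os
import threading

def run_instance(name, cpus, results):
    # Pin the calling thread to its own CPU set (Linux-only).
    # In the real setup each MLPerf instance is a separate process
    # with its own LoadGen and SUT, bound from the shell script.
    os.sched_setaffinity(0, cpus)
    total = sum(i * i for i in range(50_000))  # stand-in workload
    results[name] = (sorted(os.sched_getaffinity(0)), total)

# Split the available CPUs into two disjoint halves, one per instance.
n = os.cpu_count()
half = max(1, n // 2)
cpu_sets = {
    "inst0": set(range(half)),
    "inst1": set(range(half, n)) or {0},  # fall back on 1-CPU hosts
}

results = {}
threads = [threading.Thread(target=run_instance, args=(name, cpus, results))
           for name, cpus in cpu_sets.items()]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))
```

In the real deployment the two instances would be separate processes (LoadGen is not designed to be shared), so the shell-script approach with per-process CPU binding is the more faithful setup; the sketch above only demonstrates the affinity split.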

Dear all, we are trying to run the inference reference scripts for the PyTorch DLRM (recommendation) benchmark in accuracy mode using the official MLPerf model (tb00_40M.pt). To check accuracy, we are using...

We've been using the standard [`pycocotools`](https://pypi.org/project/pycocotools/) Python package for calculating the Object Detection accuracy since MLPerf Inference v0.5. It used to be OK for SSD-ResNet34 and SSD-MobileNet-v1, but it is...

For https://github.com/mlcommons/submissions_inference_2.1/issues/115

@psyhtest Can you start the conversation with the comments you made in the 10/11 meeting?

Is TEST01 worthwhile?