
why did the specified data precision not work for MLPerf inference?

Open Bob123Yang opened this issue 1 year ago • 4 comments

I ran ResNet50 without the precision parameter inside Docker successfully, and the result (measurements.json) shows the data type is int8.

Then I ran ResNet50 again with precision=float16, as shown below, inside Docker successfully, but the result (measurements.json) still shows the data type as int8.

It seems the precision=float16 parameter didn't take effect. How can I conveniently run the model with a different data precision in MLPerf?

cm run script --tags=run-mlperf,inference,_r4.1-dev \
   --model=resnet50 \
   --precision=float16 \
   --implementation=nvidia \
   --framework=tensorrt \
   --category=edge \
   --scenario=Offline \
   --execution_mode=valid \
   --device=cuda \
   --division=closed \
   --rerun \
   --quiet

$ cat measurements.json 
{
  "starting_weights_filename": "https://zenodo.org/record/2592612/files/resnet50_v1.onnx",
  "retraining": "no",
  "input_data_types": "int8",
  "weight_data_types": "int8",
  "weight_transformations": "no"

Bob123Yang avatar Dec 31 '24 08:12 Bob123Yang

Hi @Bob123Yang, for the Nvidia implementation this is expected behaviour: the precision is chosen automatically by the implementation, usually the best one that satisfies the MLPerf accuracy requirement. We don't have an option to change this.
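
For reference, the precision is pinned inside the per-benchmark config files in the Nvidia code rather than read from the command line, so one quick way to see what will be used is to grep those configs. This is only a sketch: <nvidia_code_dir> is a placeholder for wherever the Nvidia MLPerf inference code is checked out (e.g. inside the CM cache), not an actual path.

# Illustrative only: list the hard-coded precision settings for the ResNet50 configs
# (expected to show int8, which matches what measurements.json reports)
grep -rn "precision" <nvidia_code_dir>/configs/resnet50/ | head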

arjunsuresh avatar Dec 31 '24 20:12 arjunsuresh

Thank you @arjunsuresh, so do you mean non-Nvidia implementations have the option of changing the precision via the parameter in MLPerf?

Bob123Yang avatar Jan 02 '25 00:01 Bob123Yang

You're welcome @Bob123Yang. Actually, what I said is also true for other vendor implementations such as Intel, AMD, Qualcomm, etc. The reference implementations usually have fp16 and fp32 options, especially for the PyTorch models (a sketch of such a run is below).
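
For example, a reference-implementation run with an explicit precision might look like the following. This is illustrative only: --implementation=reference and --framework=pytorch are assumptions here, and the precision values that are accepted depend on the model and the CM version.

cm run script --tags=run-mlperf,inference,_r4.1-dev \
   --model=resnet50 \
   --implementation=reference \
   --framework=pytorch \
   --precision=fp16 \
   --device=cuda \
   --scenario=Offline \
   --quiet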

arjunsuresh avatar Jan 02 '25 15:01 arjunsuresh

Oh, that's a pity. Thanks! @arjunsuresh

Could you help confirm one more question about the NVIDIA multi-GPU scenario: how do I run MLPerf inference on multiple GPUs connected with NVLink? Is there a dedicated parameter for that, or do I just need to make sure the physical connection (such as NVLink) between the GPUs is working, after which the MLPerf run will automatically use all of the GPU resources?

Bob123Yang avatar Jan 03 '25 02:01 Bob123Yang
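
As a side note on verifying the NVLink connections themselves, the standard nvidia-smi queries below can be used before a run; they only inspect the hardware topology and do not change how MLPerf distributes work across GPUs.

# Show the GPU-to-GPU connection matrix (NVLink links appear as NV1/NV2/...)
nvidia-smi topo -m
# Show per-GPU NVLink status and link speeds
nvidia-smi nvlink --status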