
[Performance] The 16-bit quantization QDQ model cannot be accelerated by CUDA

duanshengliu opened this issue on Jul 24, 2024 · 2 comments

Describe the issue

GPU: V100 (CUDA 12.0 or 11.8). CPU: Intel(R) Xeon(R) Gold 6271C @ 2.60GHz.

I tested the performance of A8W8 and A16W16 quantized models on both CPU and CUDA. The A16W16 quantized model performs even worse on CUDA than on CPU.

Summary:

| Total Inference Time (s), repeat=100 | A8W8 | A16W16 |
| --- | --- | --- |
| CPUExecutionProvider | 6.698 s ✔️ | 30.961 s ✔️ |
| CUDAExecutionProvider | 3.870 s ✔️ | 42.365 s |

Moreover, the A16W8 and A8W16 quantized models have similar issues.
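For context, here is a minimal sketch of how an A16W16 QDQ model of this kind can be produced with onnxruntime's static quantization API. The calibration reader, input shape, and file names below are illustrative placeholders, not my exact quantization script:

```python
# Sketch: one way to produce an A16W16 QDQ model with onnxruntime's static
# quantization API. The calibration reader, input shape, and file names are
# illustrative placeholders, not the exact script used for this report.
import numpy as np
from onnxruntime.quantization import (
    CalibrationDataReader,
    QuantFormat,
    QuantType,
    quantize_static,
)

class RandomCalibrationReader(CalibrationDataReader):
    """Feeds a few random batches for calibration (placeholder data)."""
    def __init__(self, input_name="input", shape=(1, 3, 224, 224), n=8):
        self._batches = iter(
            [{input_name: np.random.rand(*shape).astype(np.float32)}
             for _ in range(n)]
        )

    def get_next(self):
        return next(self._batches, None)

quantize_static(
    model_input="mobilenetv2_float.onnx",    # hypothetical float source model
    model_output="mobilenetv2_a16w16.onnx",
    calibration_data_reader=RandomCalibrationReader(),
    quant_format=QuantFormat.QDQ,       # insert QuantizeLinear/DequantizeLinear pairs
    activation_type=QuantType.QUInt16,  # 16-bit activations (A16)
    weight_type=QuantType.QInt16,       # 16-bit weights (W16)
    # May be needed so 16-bit Q/DQ map to com.microsoft contrib ops
    # when the model's opset is below 21:
    extra_options={"UseQDQContribOps": True},
)
```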

To reproduce

This issue can be reproduced using the files in performance.zip. The reproduction commands and results are as follows:

```
cd path/to/performance
python run.py
```

which produces the following output:

```
mobilenetv2_a8w8.onnx ['CPUExecutionProvider'] Total Inference Time: 6.698 seconds
mobilenetv2_a8w8.onnx ['CUDAExecutionProvider'] Total Inference Time: 3.870 seconds
================================================================================
mobilenetv2_a16w16.onnx ['CPUExecutionProvider'] Total Inference Time: 30.961 seconds
mobilenetv2_a16w16.onnx ['CUDAExecutionProvider'] Total Inference Time: 42.365 seconds
================================================================================
```
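In case the zip is unavailable, a minimal standalone equivalent of the benchmark looks like the sketch below. The 1x3x224x224 float32 input and the warm-up run are my assumptions here; the authoritative script is run.py in the archive:

```python
# Minimal standalone equivalent of run.py. The 1x3x224x224 float32 input
# shape and the warm-up run are assumptions; the authoritative script is
# run.py in the archive above.
import time
import numpy as np
import onnxruntime as ort

def benchmark(model_path, providers, repeat=100):
    sess = ort.InferenceSession(model_path, providers=providers)
    input_name = sess.get_inputs()[0].name
    x = np.random.rand(1, 3, 224, 224).astype(np.float32)
    sess.run(None, {input_name: x})  # warm-up, excluded from the timing
    start = time.perf_counter()
    for _ in range(repeat):
        sess.run(None, {input_name: x})
    elapsed = time.perf_counter() - start
    print(f"{model_path} {providers} Total Inference Time: {elapsed:.3f} seconds")

for model in ("mobilenetv2_a8w8.onnx", "mobilenetv2_a16w16.onnx"):
    for providers in (["CPUExecutionProvider"], ["CUDAExecutionProvider"]):
        benchmark(model, providers)
```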

Urgency

Urgent

Platform

Linux

OS Version

Ubuntu 22.04

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.18.1

ONNX Runtime API

Python

Architecture

X64

Execution Provider

CUDA

Execution Provider Library Version

CUDA12/CUDA11.8

Model File

No response

Is this a quantized model?

Yes
