
Quantized ONNX Model Still Has Float32 Input/Output Tensors


Describe the issue

After quantization, the output ONNX model has faster inference and a smaller model size, but why are the input and output tensors still float32? I expected them to be uint8, since the quantized ONNX file is about one quarter of the original size. I also tried onnxruntime 1.12.0, 1.13.1, and 1.18.0, and in every case the input and output tensors are float32.

[Screenshot: the quantized model's input and output tensors, still float32]

To reproduce

onnxruntime: 1.14.1, torch: 2.3.0, torchvision: 0.18.0

The results can be reproduced by following the official example: https://github.com/microsoft/onnxruntime-inference-examples/blob/main/quantization/notebooks/imagenet_v2/mobilenet.ipynb

Urgency

No response

Platform

Linux

OS Version

Ubuntu 22.04

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.14.1

ONNX Runtime API

Python

Architecture

X86

Execution Provider

Default CPU

Execution Provider Library Version

No response

jenchun-potentialmotors · Jun 21 '24

This is the QDQ representation of the ONNX model. To perform integer-only arithmetic, you have to quantize your model to the QOperator representation. For more detail, see https://onnxruntime.ai/docs/performance/model-optimizations/quantization.html. I'm curious which method you used to quantize your model. Was it post-training quantization in ONNX Runtime? If so, you just have to change quant_format from QDQ to QOperator:

# Signature of onnxruntime.quantization.quantize_static (abridged):
def quantize_static(
    model_input: Union[str, Path, onnx.ModelProto],
    model_output: Union[str, Path],
    calibration_data_reader: CalibrationDataReader,
    quant_format=QuantFormat.QDQ, # Change this to QuantFormat.QOperator
    op_types_to_quantize=None,
    per_channel=False,
    reduce_range=False,
    activation_type=QuantType.QInt8,
    weight_type=QuantType.QInt8,
    nodes_to_quantize=None,
    nodes_to_exclude=None,
    use_external_data_format=False,
    calibrate_method=CalibrationMethod.MinMax,
    extra_options=None,
):
...
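
For reference, a minimal sketch of such a call (RandomDataReader here is a hypothetical stand-in for a real CalibrationDataReader fed with representative samples, and the model paths are placeholders):

import numpy as np
import onnxruntime
from onnxruntime.quantization import (
    CalibrationDataReader,
    QuantFormat,
    QuantType,
    quantize_static,
)

class RandomDataReader(CalibrationDataReader):
    """Feeds a few random batches as calibration data (replace with real samples)."""
    def __init__(self, model_path, num_batches=8):
        session = onnxruntime.InferenceSession(model_path, providers=["CPUExecutionProvider"])
        inp = session.get_inputs()[0]
        # Replace dynamic dimensions (None or symbolic names) with 1.
        shape = [d if isinstance(d, int) else 1 for d in inp.shape]
        self._batches = iter(
            {inp.name: np.random.rand(*shape).astype(np.float32)}
            for _ in range(num_batches)
        )

    def get_next(self):
        return next(self._batches, None)

quantize_static(
    "mobilenet_fp32.onnx",                # placeholder input model
    "mobilenet_qoperator.onnx",           # placeholder output model
    RandomDataReader("mobilenet_fp32.onnx"),
    quant_format=QuantFormat.QOperator,   # QOperator instead of the default QDQ
    activation_type=QuantType.QUInt8,
    weight_type=QuantType.QInt8,
)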

hoangtv2000 · Jun 23 '24

@hoangtv2000 Thank you for the comment! Here are the details of my quantization strategy:

  • Post-training quantization: Yes
  • Method selection: Static
  • Representation format: QDQ
  • Data type selection: Activations: uint8, Weights: int8 (U8S8)

I tried your suggestion and performed quantization with QOperator. However, the quantized model's input and output remain float32. I also tried different data types (U8U8, S8S8, and U8S8), but the results were almost identical. Although the quantized model with float32 input/output runs 2-3x as fast as the non-quantized model, I still do not understand why the quantized model's input and output are not int8.
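
For reference, the declared element types of the graph inputs and outputs can be checked directly with the onnx package (a minimal sketch; the model path is a placeholder):

import onnx

model = onnx.load("model_quant.onnx")  # placeholder path
for value_info in list(model.graph.input) + list(model.graph.output):
    elem_type = value_info.type.tensor_type.elem_type
    print(value_info.name, onnx.TensorProto.DataType.Name(elem_type))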

Do you have any idea regarding this?

jenchun-potentialmotors · Jun 25 '24

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

github-actions[bot] · Jul 25 '24

I'm facing the same problem. Have you solved it?

ChickenSellerRED · Aug 23 '24

Hi,

I am performing INT16 post-training quantization (W16A16) on a model with the ONNX Runtime static quantization function. I am using the QDQ format, since INT16 is only supported in that format. Can someone please explain how quantization happens in the QDQ format? What happens on the hardware at runtime?

"In order to perform integer-arithmetic only, you have to quantize your model to QOperator representation" - does this mean that if I run inference using an onnx runtime inference session, the INT16 model still performs floating point arithmetic? Is there a workaround to force integer arithmetic? I would like to evaluate the impact of INT16 quantization on my model's accuracy.

SachiniW · Feb 25 '25

This issue has been automatically closed as 'not planned' because it has been marked as 'stale' for more than 30 days without activity. If you believe this is still an issue, please feel free to reopen it.

snnn · Jun 07 '25