
Assistance Needed for Saving ONNX Model in QDQ or QOperator Formats

Open tuanbos opened this issue 1 year ago • 5 comments

Hi authors,

Thank you for your excellent work.

Currently, I don't see an option to save the ONNX model in QDQ or QOperator format after it has been quantized. I am using the ONNX format as input, and the output consists of the ONNX model in FP32 together with the encoding information (scale, offset).

Could you please show me how to obtain a QDQ or QOperator ONNX model from the FP32 model and the encoding information?

Thank you very much, and I look forward to your response.

tuanbos avatar Aug 01 '24 01:08 tuanbos

Hello @tuanbos,

In AIMET's ONNX export method, you need to set use_embedded_encodings to True to get the ONNX model with QDQ nodes.

Please note that this feature is currently supported for int8 QDQ nodes only.
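
For example, a minimal sketch of that export call, assuming sim is an aimet_torch QuantizationSimModel on which compute_encodings() has already been called (the path, prefix, and input shape are placeholders):

```python
import torch

# `sim` is an aimet_torch QuantizationSimModel that has already been
# calibrated via compute_encodings(); the input shape is a placeholder.
dummy_input = torch.rand(1, 3, 224, 224)

# use_embedded_encodings=True embeds the quantization parameters into the
# exported ONNX graph as QDQ nodes (currently int8 only).
sim.export(path="./export",
           filename_prefix="model_qdq",
           dummy_input=dummy_input,
           use_embedded_encodings=True)
```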

e-said avatar Aug 01 '24 09:08 e-said

Hi,

Thank you for your response.

I understand that use_embedded_encodings is only available when converting a model from PyTorch and saving the output to ONNX. This feature is implemented in the QuantizationSimModel class for PyTorch, as seen here: QuantizationSimModel for PyTorch.

For ONNX model input, I need to use the QuantizationSimModel for ONNX, which can be found here: QuantizationSimModel for ONNX. However, this class does not yet support exporting to QDQ or QOperator. My current export flow is sketched below.
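
For reference, this is roughly the flow I am using with aimet_onnx (a minimal sketch; the model path and quantizer settings are placeholders, and the calibration callback is omitted):

```python
import onnx
from aimet_onnx.quantsim import QuantizationSimModel

model = onnx.load("model_fp32.onnx")  # placeholder path
sim = QuantizationSimModel(model)     # quantizer settings omitted

# ... sim.compute_encodings(...) with a calibration callback goes here ...

# export() writes the FP32 ONNX model plus a .encodings JSON file
# (per-tensor scale/offset) -- not a QDQ or QOperator model.
sim.export(path="./export", filename_prefix="model_quantized")
```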

Is my understanding correct?

tuanbos avatar Aug 01 '24 10:08 tuanbos

Yes, your understanding is correct. My bad, I didn't see that you are using aimet_onnx.

TBH, I don't use aimet_onnx, but from the code it seems there is no option at the moment to generate the model with QDQ nodes. Did you try commenting out this line?

I hope that helps.

e-said avatar Aug 01 '24 10:08 e-said

Hi,

Yes, we tried that already, but the exported format is still the AIMET format. [image]

The nodes look QDQ-like, but they are not really native ONNX QDQ. Do you have any idea?

tuanbos avatar Aug 01 '24 10:08 tuanbos

Yes, they are not native QDQ nodes. You can try to do something similar to what they implemented in aimet_torch here; a rough sketch of the idea is below.
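
Something along these lines, assuming "native QDQ" means splicing QuantizeLinear/DequantizeLinear pairs into the FP32 graph using the scale/offset read from the .encodings file (this is a rough sketch, not AIMET code; the tensor name and encoding values are placeholders, and the model must use opset 10 or later):

```python
import numpy as np
import onnx
from onnx import helper, numpy_helper

model = onnx.load("model_fp32.onnx")  # placeholder path
graph = model.graph

tensor_name = "conv1_output"          # placeholder: tensor to wrap in QDQ
scale, zero_point = 0.0235, 128       # placeholder: taken from .encodings

# Register the scale and zero-point as initializers.
graph.initializer.extend([
    numpy_helper.from_array(np.array(scale, dtype=np.float32), tensor_name + "_scale"),
    numpy_helper.from_array(np.array(zero_point, dtype=np.uint8), tensor_name + "_zp"),
])

# Build a native QuantizeLinear -> DequantizeLinear pair.
q = helper.make_node(
    "QuantizeLinear",
    inputs=[tensor_name, tensor_name + "_scale", tensor_name + "_zp"],
    outputs=[tensor_name + "_q"],
)
dq = helper.make_node(
    "DequantizeLinear",
    inputs=[tensor_name + "_q", tensor_name + "_scale", tensor_name + "_zp"],
    outputs=[tensor_name + "_dq"],
)

# Point every consumer of the original tensor at the dequantized output.
for node in graph.node:
    for i, name in enumerate(node.input):
        if name == tensor_name:
            node.input[i] = tensor_name + "_dq"

# Insert the pair right after the producer node to keep topological order.
idx = next(i for i, n in enumerate(graph.node) if tensor_name in n.output)
graph.node.insert(idx + 1, q)
graph.node.insert(idx + 2, dq)

onnx.checker.check_model(model)
onnx.save(model, "model_qdq.onnx")
```

You would repeat this for each quantized tensor listed in the encodings file (and handle per-channel weight encodings separately if needed).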

e-said avatar Aug 01 '24 10:08 e-said

Hi @tuanbos, we're now actively working on this capability for both aimet_torch and aimet_onnx export and should have it ready in one of our upcoming releases.

quic-klhsieh avatar Apr 30 '25 21:04 quic-klhsieh