
quantsim.export(path, filename_prefix) could not generate int8 QNN ONNX model


After calling quantsim.export(path, filename_prefix), I could not get an int8 QNN ONNX model. My objective is to obtain an int8 ONNX model through the AIMET quantization toolkit, like the one shown in the attached image below.

[image: int8_ONNX_model]

However, calling quantsim.export(path, filename_prefix) only gives me .pth files, encoding files, and one fp32 ONNX model. Did I use the export functionality incorrectly? Or is there a way to convert the encoding files and the fp32 ONNX model into one int8 QNN model?
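For context, here is a minimal sketch of the flow I am using (assuming the aimet_torch API; MyModel, the calibration callback, and the paths are placeholders):

```python
import torch
from aimet_common.defs import QuantScheme
from aimet_torch.quantsim import QuantizationSimModel

model = MyModel().eval()                    # placeholder model
dummy_input = torch.randn(1, 3, 224, 224)   # placeholder input shape

# Create the quantization simulation with 8-bit weights and activations
sim = QuantizationSimModel(model,
                           dummy_input=dummy_input,
                           quant_scheme=QuantScheme.post_training_tf_enhanced,
                           default_param_bw=8,
                           default_output_bw=8)

# Calibrate: run representative data through the model so AIMET can
# compute quantization encodings (a real callback would iterate a dataset)
def forward_pass(model, _):
    with torch.no_grad():
        model(dummy_input)

sim.compute_encodings(forward_pass, None)

# Export: this produces model.pth, a fp32 model.onnx, and model.encodings
sim.export(path='./export', filename_prefix='model', dummy_input=dummy_input)
```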

JiliangNi · Feb 26 '24

You used it correctly. You can take the encodings and the FP32 model to a quantized target to get a quantized model; AIMET only simulates hardware quantization performance.

quic-mangal · Mar 20 '24

@JiliangNi please use the --keep_quant_nodes option with the QNN converters to see a QNN model with activation quant/dequant nodes. Without this option, the quant nodes are stripped from the graph.
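For instance, a sketch of how this could look with the QNN ONNX converter (only --keep_quant_nodes is confirmed above; the other flags are assumptions based on the QNN SDK converter, and the file names are placeholders):

```bash
# Convert the fp32 ONNX model exported by AIMET, applying the AIMET
# encodings as quantization overrides and keeping the quant/dequant nodes
qnn-onnx-converter \
    --input_network export/model.onnx \
    --quantization_overrides export/model.encodings \
    --input_list calibration_inputs.txt \
    --keep_quant_nodes \
    --output_path model.cpp
```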

quic-akinlawo · Mar 20 '24