
How to align AIMET-exported .onnx models with PyTorch quantization .onnx models?

Open · 1171000410 opened this issue 11 months ago · 3 comments

Hello, I quantized a model using PyTorch and exported it to ONNX in the following way, which gave me a .onnx model: `model = torchvision.models.quantization.resnet18(quantize=True)` followed by `torch.onnx.export()`. In parallel, I quantized with AIMET and exported the ONNX model following this example: https://github.com/quic/aimet/blob/develop/Examples/torch/quantization/adaround.ipynb
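For reference, a minimal sketch of that PyTorch-side export is below. The input shape, opset version, and output file name are illustrative assumptions, and how cleanly eager-mode quantized models export to ONNX depends on the PyTorch version:

```python
import torch
import torchvision

# Quantized ResNet-18; quant/dequant stubs are already part of the graph
model = torchvision.models.quantization.resnet18(quantize=True)
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # assumed input shape

# Exports QuantizeLinear/DequantizeLinear nodes into the ONNX graph
torch.onnx.export(model, dummy_input, "resnet18_torch_quant.onnx",
                  opset_version=13)
```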

After that, I got a .pth model, an .encoding file and a .onnx model.

After visualising both .onnx models with the Netron tool, I found that their structures are not the same: the model exported by AIMET is missing the quantize/dequantize nodes.

However, I need to deploy my model on the same chip. Is there any way to align the model exported by AIMET with the model exported directly by PyTorch, i.e., by adding the quantize/dequantize nodes?

Thanks!

1171000410 · Jul 26 '23 16:07

[Netron screenshots of the two exported ONNX graphs]

The first one is resnet18_torch_quant.onnx, and the second one is resnet18_after_adaround.onnx.

1171000410 · Jul 27 '23 00:07

@1171000410 Just to get some clarification: are you trying to load PyTorch-computed encodings into AIMET?

And if you just want to export from AIMET with quantization nodes, you can use `save_model_with_embedded_quantization_nodes`.
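A hedged sketch of that export path is below. The exact signature of `save_model_with_embedded_quantization_nodes` has varied across AIMET releases, so the argument names here are assumptions to check against your installed version; `calibrate` is a stand-in for a real forward pass over representative data:

```python
import torch
from torchvision.models import resnet18
from aimet_torch.quantsim import QuantizationSimModel

model = resnet18().eval()
dummy_input = torch.randn(1, 3, 224, 224)

# Wrap the float model with AIMET's quantization simulation
sim = QuantizationSimModel(model, dummy_input=dummy_input)

def calibrate(sim_model, _):
    # Placeholder calibration pass; use representative data in practice
    with torch.no_grad():
        sim_model(dummy_input)

sim.compute_encodings(forward_pass_callback=calibrate,
                      forward_pass_callback_args=None)

# Export an ONNX graph with quantize/dequantize ops embedded in it
# (argument names are assumptions; check your AIMET version's docs)
QuantizationSimModel.save_model_with_embedded_quantization_nodes(
    sim.model, path="./output", filename_prefix="resnet18_embedded",
    dummy_input=dummy_input)
```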

quic-mangal · Jul 28 '23 18:07

> @1171000410 Just to get some clarification: are you trying to load PyTorch-computed encodings into AIMET?
>
> And if you just want to export from AIMET with quantization nodes, you can use `save_model_with_embedded_quantization_nodes`.

Is there any way to generate the onnx + encodings pair from the ONNX model produced by `save_model_with_embedded_quantization_nodes`?
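For context, the standard AIMET export path is what produces that pair today; whether the embedded-nodes ONNX can also be paired with an encodings file is exactly the open question here. A sketch, continuing from the `sim` object in the earlier snippet with illustrative paths:

```python
# Standard AIMET export: writes ./output/resnet18.onnx together with
# ./output/resnet18.encodings. `sim` is the QuantizationSimModel from
# the previous sketch, with encodings already computed.
sim.export(path="./output", filename_prefix="resnet18",
           dummy_input=dummy_input)
```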

hcqylymzc · Aug 24 '23 06:08