Operator combination of model structures
Hello,
I am using AIMET for QAT, but when I call fold_all_batch_norms I see a large discrepancy between the model outputs before and after folding.
I also tried ./Examples/torch/quantization/qat.ipynb, and fold_all_batch_norms produced differences there as well, but in that case the results were acceptable.
Both models are built from Conv, BatchNorm, ReLU, and residual connections. Is there a guide describing how the operators are matched (paired) for folding? I suspect the Conv/BatchNorm pairing is going wrong somewhere and causing the problem.
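For reference, this is a minimal sketch of how the before/after comparison can be done. The resnet18 model is only a stand-in for the actual Conv/BN/ReLU + residual network, the input shape is a placeholder, and the fold_all_batch_norms signature is the one used in the QAT example notebook, so it may differ between AIMET versions:

```python
import torch
from torchvision.models import resnet18  # stand-in for the actual Conv/BN/ReLU + residual model
from aimet_torch.batch_norm_fold import fold_all_batch_norms

model = resnet18().eval()
dummy_input = torch.randn(1, 3, 224, 224)  # placeholder input shape

with torch.no_grad():
    out_before = model(dummy_input)

# Fold BatchNorm layers into the preceding Conv layers
# (signature as used in Examples/torch/quantization/qat.ipynb).
fold_all_batch_norms(model, input_shapes=(1, 3, 224, 224))

with torch.no_grad():
    out_after = model(dummy_input)

# In eval mode the fold should be (nearly) lossless, so this difference
# is expected to be small numerical noise rather than a large gap.
print("max abs diff:", (out_before - out_after).abs().max().item())
```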