e-said
Hi @escorciav I'm using aimet_torch, which has a [method](https://github.com/quic/aimet/blob/d81e59bcdbcf900d2847c6c7a7f498188f9ad745/TrainingExtensions/torch/src/python/aimet_torch/quantsim.py#L1807) to convert AIMET custom quantization nodes into torch-native QDQ nodes. When I use native QDQ torch nodes and export...
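For reference, here's a minimal sketch (not the AIMET method itself) of what torch-native QDQ nodes look like and how they surface as QuantizeLinear/DequantizeLinear pairs when exported to ONNX. The scale/zero_point values below are placeholders; AIMET would derive them from the computed encodings:

```python
import torch


class QDQConv(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, 3)

    def forward(self, x):
        # Native torch QDQ node: quantize + dequantize in a single op.
        # scale/zero_point are placeholder values for illustration only.
        x = torch.fake_quantize_per_tensor_affine(
            x, scale=0.02, zero_point=128, quant_min=0, quant_max=255)
        return self.conv(x)


model = QDQConv().eval()
dummy = torch.randn(1, 3, 32, 32)
# Each fake-quantize op is exported as a QuantizeLinear/DequantizeLinear pair.
torch.onnx.export(model, dummy, "qdq_model.onnx", opset_version=13)
```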
Hi @escorciav I don't have a simple script showing this (my pipeline is quite complex), but I can share some hints to help you create a script to test this:...
Same question here, any updates on this, please? (Thanks to Google Translate :-) the question is => "The network I tested was ConvTranspose + BatchNormalization. But the BN layer did...
Hello @CangHaiQingYue, the model preparer is highly recommended in AIMET; you can find some more info on this API [here](https://quic.github.io/aimet-pages/releases/latest/api_docs/torch_model_preparer.html). I didn't get the issue you are facing, but probably partial...
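For anyone landing here, a minimal sketch of the model preparer call, assuming a standard torchvision model (see the linked docs for the full parameter list):

```python
import torch
from torchvision.models import resnet18
from aimet_torch.model_preparer import prepare_model

model = resnet18().eval()
# prepare_model rewrites functional ops (e.g. torch.nn.functional.relu)
# into torch.nn modules so AIMET can wrap and quantize them.
prepared_model = prepare_model(model)
```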
Hi @superpigforever, there are two points I would recommend checking: 1/ BN folding during QAT (using the method fold_all_batch_norms, see the sketch below) => this is recommended to ensure consistency between QAT and hardware...
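To illustrate point 1/, a minimal sketch of BN folding before QAT, assuming a ResNet-style model with 224x224 inputs:

```python
import torch
from torchvision.models import resnet18
from aimet_torch.batch_norm_fold import fold_all_batch_norms

model = resnet18().eval()
# Fold each BatchNorm into its preceding Conv/Linear so the QAT graph
# matches the fused graph that runs on hardware; returns the folded pairs.
folded_pairs = fold_all_batch_norms(model, input_shapes=(1, 3, 224, 224))
```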
Hi @JiliangNi I'm not aware of QOperator support in AIMET. However, you can obtain QDQ format in your ONNX by using `use_embedded_encodings=True` with AIMET's ONNX export feature. If you're unfamiliar with...
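A minimal sketch of that export path, assuming `use_embedded_encodings` is accepted directly by `QuantizationSimModel.export` in your AIMET version (please check the signature for your release):

```python
import torch
from torchvision.models import resnet18
from aimet_torch.quantsim import QuantizationSimModel

model = resnet18().eval()
dummy_input = torch.randn(1, 3, 224, 224)

sim = QuantizationSimModel(model, dummy_input=dummy_input)
# Calibrate: run representative data through sim.model to fix the encodings.
# A single dummy batch is used here only to keep the sketch self-contained.
sim.compute_encodings(lambda m, _: m(dummy_input), None)

# With embedded encodings, the quantization parameters are written into the
# ONNX graph as QDQ nodes instead of a separate .encodings file.
sim.export("./out", "model_qdq", dummy_input, use_embedded_encodings=True)
```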