Akin Solomon

8 comments by Akin Solomon

Hello @xiexiaozheng, I would like to confirm that your comparison is one-to-one. Can you clarify this comment: "I found that it was related to one layer, which is...

Hello @superpigforever, batchnorm layers are optimized out during conversion by folding their encoding values into the preceding conv2d (including depthwise and transposed variants) or fully connected layer. As such, the missing...
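
For reference, a minimal sketch of what conv + batchnorm folding looks like numerically (illustrative only, not the converter's actual code; the `fold_bn_into_conv` helper and the assumed weight layout are hypothetical):

```python
import numpy as np

def fold_bn_into_conv(conv_w, conv_b, bn_gamma, bn_beta, bn_mean, bn_var, eps=1e-5):
    """Fold batchnorm parameters into the preceding conv's weights and bias.

    conv_w: (out_channels, in_channels, kH, kW), conv_b: (out_channels,).
    The folded conv alone reproduces conv followed by batchnorm.
    """
    scale = bn_gamma / np.sqrt(bn_var + eps)          # per-output-channel multiplier
    folded_w = conv_w * scale[:, None, None, None]    # rescale each output channel
    folded_b = (conv_b - bn_mean) * scale + bn_beta   # shift the bias to match
    return folded_w, folded_b
```

Because the batchnorm disappears after this fold, only the folded conv output carries a quantization encoding, which is why no separate batchnorm encoding shows up in the converted model.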

@JiliangNi please use the --keep_quant_nodes option with the qnn converters to see a QNN model with activation quant/dequant nodes. Without this option, quant nodes are stripped from the graph.

Hello @xiexiaozheng, if you're using per-channel quantization for a given op, then the quantization range needs to be symmetric, i.e. the offset should be zero or 2^bw - 1...
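
To illustrate the constraint (a sketch only, not the quantizer's internal code; both helper names are made up for this example): an asymmetric encoding derives both a scale and an offset from the observed min/max, while a symmetric one pins the offset and derives the scale from the largest magnitude, which is the form the per-channel path requires.

```python
import numpy as np

def asymmetric_encoding(x, bitwidth=8):
    """Asymmetric: scale and offset both depend on the data's min/max."""
    qmin, qmax = 0, 2 ** bitwidth - 1
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / (qmax - qmin)
    offset = round(lo / scale)       # non-zero in general
    return scale, offset

def symmetric_encoding(x, bitwidth=8):
    """Symmetric: the range is centred on zero, so the offset is pinned
    and only the scale is derived from the data."""
    qmax = 2 ** (bitwidth - 1) - 1
    scale = float(np.abs(x).max()) / qmax
    offset = 0                       # fixed, independent of the data
    return scale, offset
```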

@xiexiaozheng Per-channel quantization is only supported for parameters, so that behavior is the default. Are you referring to training as in QAT in AIMET? @quic-mangal can you comment on if per-channel...

@zhuoran-guo Commenting out that line may be a non-issue depending on your model (i.e. it may work fine). In certain cases, such as a one-to-many PyTorch-to-ONNX op mapping,...

Hello @xiexiaozheng, per-channel quantization is recommended for convolution-like operations to increase quantization resolution.
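
A rough illustration of the resolution argument with toy numbers (not from any particular model; the `quant_dequant` helper is made up for this example): a single per-tensor scale is dictated by the widest-range filter, so narrow-range filters lose precision, whereas per-channel scales track each filter's own range.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy conv weights: output channel 0 has a much wider range than channel 1.
w = np.stack([rng.normal(0, 1.0, (3, 3, 3)),    # wide-range filter
              rng.normal(0, 0.01, (3, 3, 3))])  # narrow-range filter

def quant_dequant(x, scale, bitwidth=8):
    qmax = 2 ** (bitwidth - 1) - 1
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale

# Per-tensor: one scale for the whole tensor, set by the largest |w|.
per_tensor = quant_dequant(w, np.abs(w).max() / 127)

# Per-channel: each output channel gets its own scale.
scales = np.abs(w).max(axis=(1, 2, 3)) / 127
per_channel = quant_dequant(w, scales[:, None, None, None])

for name, approx in [("per-tensor", per_tensor), ("per-channel", per_channel)]:
    err = np.abs(w[1] - approx[1]).max()  # reconstruction error on the narrow filter
    print(f"{name}: max error on narrow-range channel = {err:.6f}")
```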

@xiexiaozheng Can you let us know what version of SNPE you are using? Are you seeing the mismatched quantization encodings even after using the snpe-dlc-quantize tool with --override_params?