quantized.pytorch
Cheating when applying the add operation in ResNet
Hi! It looks to me like cheating how the '+' operation at the end of the residual block is performed in the quantized ResNet implementation. The sum should require a 16-bit accumulator, and both input tensors ought to be quantized. What we actually get is that the op's inputs are 32-bit (the residual input) and 16-bit (the output of the last qconv), and the result is 32-bit, so accuracy doesn't drop at all.
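
To make the concern concrete, here is a minimal sketch (my own illustration, not the repo's code) of the difference between a hardware-faithful quantized residual add and the behavior described above. The `quantize` helper is a hypothetical symmetric uniform quantizer; the repo's actual quantization scheme may differ.

```python
import torch

def quantize(x: torch.Tensor, num_bits: int = 16) -> torch.Tensor:
    # Hypothetical symmetric uniform quantizer, used only to illustrate
    # the point; the repo's actual quantization code may differ.
    qmax = 2.0 ** (num_bits - 1) - 1
    scale = x.abs().max().clamp(min=1e-8) / qmax
    return (x / scale).round().clamp(-qmax, qmax) * scale

def residual_add_strict(residual: torch.Tensor, conv_out: torch.Tensor) -> torch.Tensor:
    # What a 16-bit datapath would actually see: BOTH inputs quantized
    # to 16 bits before the add.
    return quantize(residual, 16) + quantize(conv_out, 16)

def residual_add_as_described(residual: torch.Tensor, conv_out: torch.Tensor) -> torch.Tensor:
    # The behavior described in this issue: the residual branch stays a
    # full-precision 32-bit float tensor, only the qconv output is
    # quantized, and '+' yields a 32-bit result -- hence no accuracy drop.
    return residual + quantize(conv_out, 16)

if __name__ == "__main__":
    residual = torch.randn(1, 64, 56, 56)
    conv_out = torch.randn(1, 64, 56, 56)
    diff = (residual_add_strict(residual, conv_out)
            - residual_add_as_described(residual, conv_out)).abs().max()
    print(f"max difference between the two adds: {diff:.6f}")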