RepVGG
Question about inserting BN before QAT
Regarding quantization, it is stated that "We insert BN after the converted 3x3 conv layers because QAT with torch.quantization requires BN".

I wonder why QAT must have BN after the conv. If we don't have BN, can't we just call fuse_modules with the ['conv', 'relu'] pattern instead?
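
Here is a minimal sketch of what I mean, assuming a deploy-mode block that is just Conv2d -> ReLU after re-parameterization (the `DeployBlock` class and its channel sizes are made up for illustration, not from the repo). `fuse_modules` accepts the `['conv', 'relu']` pattern without any BN module, and `prepare_qat` can then run on the fused model:

```python
import torch
import torch.nn as nn

# Hypothetical deploy-mode block: after re-parameterization there is only
# a 3x3 conv followed by ReLU, with no BN left in the module.
class DeployBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x))

model = DeployBlock(3, 16)

# Fuse conv + relu directly; no BN module is required for this pattern.
model.eval()
fused = torch.quantization.fuse_modules(model, [["conv", "relu"]])

# Standard QAT preparation on the fused model.
fused.train()
fused.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
qat_model = torch.quantization.prepare_qat(fused)
print(qat_model)
```

This seems to prepare for QAT without BN, so I'm unsure why inserting BN would be required rather than just preferable.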