
Accuracy drop when batch-norm folding is enabled

NilakshanKunananthaseelan opened this issue 4 years ago

I'm trying to generate quantization scales for resnet18 with 5 bits for weights and 7 bits for activations. When bn_folding is enabled, there is a considerable drop in accuracy. I have been using the ImageNet validation set for the LAPQ optimization.

python ../lapq/lapq_v2.py -a resnet18 -b 512 --dataset imagenet --datapath ../../../data/imagenet --pretrained --min_method Powell -maxi 1 -maxf 1 -cs 512 -exp lapq_v2 -ba 7 -bw 5

With this parameter setting (no BN folding), I was able to get 65% accuracy.
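For context, this is roughly what I understand the per-tensor scales being optimized to mean. The snippet below is only an illustrative sketch of symmetric uniform quantization at a given bit width, not the repository's code, and the helper name `quantize_symmetric` is my own:

```python
# Illustrative sketch only (not LAPQ's implementation): symmetric uniform
# quantization of a tensor with a single per-tensor scale at a given bit width.
import torch

def quantize_symmetric(x: torch.Tensor, num_bits: int) -> torch.Tensor:
    # With b bits, the symmetric integer levels span [-(2^(b-1) - 1), 2^(b-1) - 1].
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.abs().max() / qmax                      # the per-tensor scale being tuned
    q = torch.clamp(torch.round(x / scale), -qmax, qmax)
    return q * scale                                  # simulated (fake) quantization

w_q = quantize_symmetric(torch.randn(64, 3, 7, 7), num_bits=5)   # 5-bit weights
a_q = quantize_symmetric(torch.rand(1, 64, 56, 56), num_bits=7)  # 7-bit activations
```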

python ../lapq/lapq_v2.py -a resnet18 -b 512 --dataset imagenet --datapath ../../../data/imagenet --pretrained --min_method Powell -maxi 1 -maxf 1 -cs 512 -exp lapq_v2 -ba 7 -bw 5 -bn

With BN folding enabled, I got 61%. I expected BN folding to speed things up while keeping nearly the same accuracy, and as far as I can tell the absorb_bn function from utils does the correct job. Does the accuracy drop relate to the ResNet model (the custom ResNet model has an additional ReLU layer after downsampling, which I guess should not influence BN folding), or to the dataset?
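For reference, this is the standard conv/BN fusion I assumed absorb_bn performs. The sketch below is my own illustration (the helper name `fold_bn_into_conv` is hypothetical), so the repository's implementation may differ in details:

```python
# Minimal sketch of folding a BatchNorm2d into the preceding Conv2d
# (standard inference-time fusion; shown only to state my assumption).
import torch
import torch.nn as nn

def fold_bn_into_conv(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      stride=conv.stride, padding=conv.padding,
                      dilation=conv.dilation, groups=conv.groups, bias=True)
    # Per-output-channel scale from the BN statistics: gamma / sqrt(var + eps).
    std = torch.sqrt(bn.running_var + bn.eps)
    scale = bn.weight / std
    fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
    conv_bias = conv.bias.data if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.data = (conv_bias - bn.running_mean) * scale + bn.bias.data
    return fused
```

One thing the fusion makes visible: each output channel of the folded weight is rescaled by gamma / sqrt(var + eps), so the weight ranges the 5-bit quantizer has to cover differ from the unfolded model, which may be relevant to the gap I'm seeing.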