DPNs
batch normalization layer
I noticed that in the model json files, there are no "moving_mean" and "moving_variance" entries in the BatchNorm layers. Can you explain why? Thanks!
@LeonJWH
MXNet does not store "moving_mean" and "moving_variance" in the json file; the json only describes the symbolic graph, while all parameter values (including the BN running statistics) live in the separate .params file. ( see: http://data.dmlc.ml/mxnet/models/imagenet/resnet )
Please ask this question at MXNet repo for more information. Thanks!
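You can confirm this yourself by inspecting a symbol file. A minimal sketch ( the filename below is just an example; any released "-symbol.json" checkpoint works the same way ):

```python
import json

# example filename: point this at any downloaded "-symbol.json" file
with open('resnet-50-symbol.json') as f:
    graph = json.load(f)

# the json only describes the computation graph (ops, names, wiring),
# so parameter values such as moving_mean / moving_variance never appear
print(graph.keys())  # nodes, arg_nodes, heads, ...
bn_nodes = [n for n in graph['nodes'] if n['op'] == 'BatchNorm']
print(bn_nodes[0])   # op, name, inputs, attrs -- no tensor values
```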
Have you merged "moving_mean" and "moving_variance" params into "gamma" and "beta"?
@LeonJWH
No, we didn't merge them into any other params. ( see: forward code )
You can get the raw values via _, _, aux_params = mx.model.load_checkpoint(prefix, epoch) ( see: score.py ), where aux_params is a dict that contains the "moving_mean" and "moving_var" values for each BN layer.
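For example ( the prefix / epoch values below are placeholders for whichever checkpoint you downloaded ):

```python
import mxnet as mx

# placeholders: point these at a downloaded checkpoint,
# e.g. prefix='resnet-50', epoch=0
prefix, epoch = 'resnet-50', 0
sym, arg_params, aux_params = mx.model.load_checkpoint(prefix, epoch)

# aux_params holds the BN running statistics, keyed per layer
for name, value in sorted(aux_params.items()):
    if name.endswith('_moving_mean') or name.endswith('_moving_var'):
        print(name, value.shape)
```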
OK, I'll check it out soon. Thanks!
@cypw How did you refine the batch normalization layers after training? Do you plan to release this code?
@terrychenism
We refined the batch normalization layers as suggested by [1].
In [1], the authors refine the BN layers by computing the true average (rather than the moving average) of the BN statistics over a sufficiently large number of training batches after the training procedure finishes ( see: ResNet ). This does require some extra coding.
To make things easier, our implementation freezes all layers except the BN layers and refines the params in the BN layers for one epoch, then uses the refined moving values as the final result. I am not sure which strategy is better, but our implementation does not require extra coding.
-------- [1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CVPR, 2016.
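A rough sketch of this one-epoch refinement with the MXNet Module API ( not our exact training script; prefix, epoch, train_iter, and the optimizer settings are placeholders ):

```python
import mxnet as mx

# placeholders: a trained checkpoint and an iterator over the training set
sym, arg_params, aux_params = mx.model.load_checkpoint(prefix, epoch)

# freeze every parameter except the BN scale/shift (gamma/beta);
# moving_mean / moving_var are aux states and are re-estimated by the
# training-mode forward passes regardless
fixed = [name for name in arg_params
         if not (name.endswith('_gamma') or name.endswith('_beta'))]

mod = mx.mod.Module(symbol=sym, context=mx.gpu(0),
                    fixed_param_names=fixed)

# one pass over the training set refreshes the moving statistics
mod.fit(train_iter, num_epoch=1,
        arg_params=arg_params, aux_params=aux_params,
        optimizer='sgd',
        optimizer_params={'learning_rate': 1e-4, 'momentum': 0.9})

# the refined moving values are saved along with the checkpoint
mod.save_checkpoint(prefix + '-bn-refined', 1)
```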
Great idea! Does this procedure improve Top-1 accuracy? By about 1%?
@terrychenism
=_=!! Nope, it only improved the Top-5 accuracy by about 0.03%, and it actually had a small negative effect on Top-1 accuracy. ( In fact, the original Top-1 accuracy is a little bit higher than the released accuracy. )
OK, thanks! I will try this step on ResNeXt.