
Question about the scale and shift operation in the Instance Normalization layer.

Open vd001 opened this issue 6 years ago • 2 comments

Hi, I tried to replace the Instance Normalization (IN) layer with "MVN layer + Scale layer" in Caffe, as in issue https://github.com/XingangPan/IBN-Net/issues/4, but found the network hard to converge. When I remove every Scale layer following MVN (i.e. use MVN layers only), the network converges. My questions are: If I replace IN with MVN layers only in Caffe, does it hurt the generalization or transfer ability of IBN-Net, or is the Scale layer really important? And what makes the net hard to converge when an MVN layer is followed by a Scale layer? Thanks again!

vd001 avatar Aug 31 '18 03:08 vd001

@vd001 I may not be able to answer your question, since I haven't tried IBN-Net without scale layers. In PyTorch, the scale layer does not interfere with convergence. You may give it a try and see if the model works well without scale layers. BTW, you may want to check whether the settings of the scale layers are correct. For example, 'scale' and 'shift' should be initialized to 1 and 0 respectively, and they should have a proper learning rate.

XingangPan avatar Aug 31 '18 04:08 XingangPan
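To make the initialization advice above concrete, here is a minimal NumPy sketch (the function names `mvn` and `instance_norm` are mine, not from the repo) showing that IN is just MVN followed by a per-channel scale and shift, and that with scale = 1 and shift = 0 the combined "MVN + Scale" stack starts out numerically identical to plain MVN, so the identity initialization should not by itself change the behavior at step 0:

```python
import numpy as np

def mvn(x, eps=1e-5):
    # Mean-variance normalization per sample and per channel,
    # over the spatial dimensions. x has shape (N, C, H, W).
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def instance_norm(x, gamma, beta, eps=1e-5):
    # IN = MVN + learnable per-channel affine (scale gamma, shift beta).
    # gamma and beta have shape (C,).
    g = gamma.reshape(1, -1, 1, 1)
    b = beta.reshape(1, -1, 1, 1)
    return g * mvn(x, eps) + b

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 3, 4, 4))

# Identity initialization: scale = 1, shift = 0.
gamma = np.ones(3)
beta = np.zeros(3)

# With this init, IN output equals plain MVN output exactly.
assert np.allclose(instance_norm(x, gamma, beta), mvn(x))
```

If training diverges only when the Scale layers are present, a plausible suspect (as noted above) is a non-identity initialization or an inappropriate learning-rate multiplier on those layers, since a badly scaled gamma amplifies activations at every normalization point.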

@vd001 I have encountered the same problem as you. Have you solved it?

lihui52 avatar Feb 19 '19 03:02 lihui52