IBN-Net
Question about the scale and shift operation in the instance normalization layer.
Hi, I tried to replace the instance normalization (IN) layer with "MVN layer + Scale layer" in Caffe, as discussed in issue https://github.com/XingangPan/IBN-Net/issues/4, but found the network hard to converge. When I remove every Scale layer that follows an MVN layer (i.e., use MVN layers only), the network converges. My questions are: if I replace IN with MVN layers only in Caffe, does that hurt the generalization or transfer ability of IBN-Net, or is the Scale layer really important? And what makes the network hard to converge when an MVN layer is followed by a Scale layer? Thanks again!
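For concreteness, the replacement being discussed can be sketched in PyTorch (an illustrative equivalence, not code from either framework): IN with affine parameters is mean-variance normalization over each (sample, channel) map, which is what Caffe's MVN layer computes, followed by a learnable per-channel scale and shift, which is what Caffe's Scale layer with a bias term applies.

```python
import torch
import torch.nn as nn

# Reference path: IN with learnable affine parameters.
x = torch.randn(2, 64, 32, 32)
inorm = nn.InstanceNorm2d(64, affine=True)

# Two-step path: per-(sample, channel) mean-variance normalization
# (the MVN part) ...
mean = x.mean(dim=(2, 3), keepdim=True)
var = x.var(dim=(2, 3), keepdim=True, unbiased=False)
x_norm = (x - mean) / torch.sqrt(var + 1e-5)  # 1e-5 = InstanceNorm2d's default eps

# ... followed by a per-channel scale and shift (the Scale-layer part).
# With default initialization: scale = 1, shift = 0.
gamma = inorm.weight.view(1, -1, 1, 1)
beta = inorm.bias.view(1, -1, 1, 1)
y = x_norm * gamma + beta

print(torch.allclose(inorm(x), y, atol=1e-5))  # True
```

Note that with the default initialization (scale = 1, shift = 0) the affine step is an identity at the start of training, so removing it changes only what the network can learn, not its initial output.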
@vd001 I may not be able to answer your question, since I haven't tried IBN-Net without scale layers. In PyTorch, the scale layer does not interfere with convergence. You may have a try and see if the model works well without scale layers. BTW, you may check whether the settings of the scale layers are correct. For example, the 'scale' and 'shift' should be initialized to 1 and 0, and they should have a proper learning rate.
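In PyTorch terms, those two checks might look something like this (a sketch under the assumption that the IN layers are `nn.InstanceNorm2d` modules with `affine=True`; the Caffe analogues are the Scale layer's `filler`/`bias_filler` and `lr_mult` settings):

```python
import torch.nn as nn
import torch.optim as optim

# Toy model standing in for an IBN-Net branch (assumption: IN layers are
# nn.InstanceNorm2d modules with affine=True).
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.InstanceNorm2d(64, affine=True),
    nn.ReLU(inplace=True),
)

# Check 1: scale initialized to 1, shift initialized to 0.
for m in model.modules():
    if isinstance(m, nn.InstanceNorm2d) and m.affine:
        assert (m.weight == 1).all() and (m.bias == 0).all()

# Check 2: give the scale/shift parameters their own learning rate
# (analogous to lr_mult on a Caffe Scale layer; the 10x ratio here is
# only a placeholder, not a recommendation).
in_params = [p for m in model.modules()
             if isinstance(m, nn.InstanceNorm2d)
             for p in m.parameters()]
in_ids = {id(p) for p in in_params}
base_params = [p for p in model.parameters() if id(p) not in in_ids]
optimizer = optim.SGD([
    {"params": base_params, "lr": 0.1},
    {"params": in_params, "lr": 0.01},
], momentum=0.9)
```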
@vd001 I have encountered the same problem as you. Have you solved it?