APoT_Quantization

Weight normalization

GendalfSeriyy opened this issue 4 years ago • 4 comments

Hello! I found that without weight normalization the network stops learning and the loss becomes NaN. Could you please explain why this happens and how it can be fixed?

GendalfSeriyy avatar May 19 '20 17:05 GendalfSeriyy

Hi,

When we first tested our algorithm without weight normalization, we also ran into that problem. It seems that the gradients of the clipping parameter in several layers suddenly explode. We then tried using smaller learning rates for the clipping parameter, but the performance was not good.

We think the problem is that the distribution of the weights changes significantly during training, and there is no heuristic that can tell when to increase the LR (to accommodate the shift in the weight distribution) or when to decrease it (to stabilize training). Therefore, we came up with a method to normalize the weights. Weight normalization is inspired by Batch Normalization on activations: we find that learning the clipping parameter for activation quantization does not have the NaN issue, since BN already keeps the activation distribution stable.
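For reference, the idea is roughly the following. This is a minimal sketch rather than our exact implementation: the uniform quantizer here is only a stand-in for the actual APoT quantizer, and the straight-through estimator for the rounding is omitted.

```python
import torch

def normalize_weight(w, eps=1e-5):
    # zero-mean, unit-variance normalization of the real-valued weights
    # before clipping / quantization
    return (w - w.mean()) / (w.std() + eps)

def quantize_weight(w, alpha, num_bits=4):
    # `alpha` is the learnable clipping parameter; after normalization the
    # weight distribution stays stable, so the gradient of `alpha` is much
    # less likely to explode
    w = normalize_weight(w)
    w = torch.clamp(w / alpha, -1.0, 1.0)      # clip to [-alpha, alpha], rescaled to [-1, 1]
    n = 2 ** (num_bits - 1) - 1
    w_q = torch.round(w * n) / n               # uniform levels as a placeholder for APoT levels
    return w_q * alpha
```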

yhhhli avatar May 20 '20 06:05 yhhhli

Thanks for the answer! We also noticed that when quantizing the first and last layers to the same bit-width as the other layers, the network also fails to learn. To be more precise, the network trains for a certain number of epochs, but then the accuracy drops to 10% and no longer improves. Have you carried out such experiments?

GendalfSeriyy avatar Jun 01 '20 17:06 GendalfSeriyy

I think weight normalization cannot be applied to the last layer, because the output of the last layer is the output of the network and there is no BN after it to standardize its distribution. For the last layer, maybe you can apply the DoReFa scheme to quantize the weights and our APoT quantization for the activations.
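For the DoReFa part, a minimal sketch of the standard k-bit DoReFa weight quantizer (again, the straight-through estimator needed to backprop through `round()` is omitted):

```python
import torch

def quantize_k(x, k):
    # uniform quantization of x in [0, 1] to k bits
    n = float(2 ** k - 1)
    return torch.round(x * n) / n

def dorefa_weight_quant(w, k):
    t = torch.tanh(w)
    w01 = t / (2 * t.abs().max()) + 0.5        # map weights into [0, 1]
    return 2 * quantize_k(w01, k) - 1          # map quantized values back to [-1, 1]
```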

yhhhli avatar Jun 02 '20 11:06 yhhhli

Thanks for the great work and the clarification on the weight_norm! I want to ask: after applying weight normalization to the real-valued weights, should the LR for \alpha be the same as for the weights, or should the LR and weight_decay for \alpha be adjusted (like the settings in your commented code: https://github.com/yhhhli/APoT_Quantization/blob/a8181041f71f07cd2cefbfcf0ace05e47bb0c5d0/ImageNet/main.py#L181)?
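For concreteness, something like the following parameter-group setup is what I have in mind. This is just a sketch: the parameter name `alpha` and the numeric values are placeholders, not your actual settings.

```python
import torch

# assuming `model` is the quantized network that has already been built,
# with its clipping parameters named `alpha`
alpha_params = [p for n, p in model.named_parameters() if 'alpha' in n]
other_params = [p for n, p in model.named_parameters() if 'alpha' not in n]

optimizer = torch.optim.SGD(
    [
        {'params': other_params},                                     # default lr / weight decay
        {'params': alpha_params, 'lr': 0.01, 'weight_decay': 2e-5},   # separate settings for alpha
    ],
    lr=0.1, momentum=0.9, weight_decay=1e-4,
)
```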

wu-hai avatar Nov 29 '21 05:11 wu-hai