
Should we prevent over-regularization?

Open walkerning opened this issue 5 years ago • 1 comment

In every one-shot parameter training step, only a subset of the parameters is active, especially when mepa_sample_size is small. By default we apply weight decay to all of the super net's parameters in every training step. Is this "over-regularization", or a desired behavior (which I will refer to as "auto-regularization")? When some parameters are not active in any of the sampled architectures, maybe they should not be regularized, at least at the very beginning of training: decaying them anyway would leave the unsampled paths under-trained, while the more frequently sampled architectures get trained even better. This could lead to insufficient exploration.

However, once the controller is reasonably well trained, a rarely sampled path likely means that path simply does not work well, and the reduced training plus the extra regularization these paths receive acts as an "auto-regularization" of the super network. (But do we really need this auto-regularization, given that the only use of the super network is to serve as a performance indicator of its sub-networks?)
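For concreteness, one way to avoid regularizing inactive parameters would be to turn off the optimizer's built-in weight decay and add the decay term manually, only for parameters that actually received gradients from the sampled sub-network in that step. The sketch below is only illustrative (not aw_nas code): the supernet / optimizer / loss names are placeholders, and the "has a non-zero gradient" test is a crude proxy for "was on a sampled path".

```python
import torch

def step_with_selective_decay(supernet, optimizer, loss, weight_decay=1e-4):
    """One training step that only weight-decays parameters active in the
    currently sampled sub-network. Assumes the optimizer was constructed
    with weight_decay=0 so its built-in decay is disabled."""
    optimizer.zero_grad()
    loss.backward()
    for p in supernet.parameters():
        # A parameter with no (or an all-zero) gradient was not on any
        # sampled path in this step; leave it unregularized.
        if p.grad is None or p.grad.abs().sum().item() == 0:
            continue
        # Manual L2 decay, equivalent to the optimizer's weight_decay but
        # restricted to the active parameters: grad += wd * p
        p.grad.add_(p, alpha=weight_decay)
    optimizer.step()
```

Switching from this selective decay to the default decay-everything behavior at some point during training would be one way to compare the "under-trained paths" effect early on against the "auto-regularization" effect later.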

walkerning avatar May 18 '19 15:05 walkerning

Not so important for now...

walkerning avatar May 21 '19 13:05 walkerning