
Confusion about initialization in bigger nets

Open · koenigpeter opened this issue 5 years ago · 2 comments

Hey, I'm trying out concrete dropout with bigger nets (namely DenseNet121 and ResNet18), and for that I ported the Keras implementation of spatial concrete dropout to PyTorch. It works for DenseNet121 (the model converges) but strangely not for ResNet18, so I was wondering whether the initialization I used is wrong. For both weight_regularizer and dropout_regularizer I used the initialization given in the MNIST example of the spatial concrete dropout Keras implementation (both obtained by dividing by the training set length). However, the paper seems to use 0.01 x N x H x W for the dropout regularizer with bigger models, and that multiplication would give a much, much larger factor than the 2. / N specified in the example. Which initialization is right? I would greatly appreciate it if you could clear up my confusion! Cheers!
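
For reference, this is a sketch of the initialization I copied from the MNIST example; the values of N and l here are illustrative, not the ones from my actual runs:

```python
N = 60000         # number of training examples (MNIST in the example)
l = 1e-4          # prior length scale, as I understand the example uses
wd = l ** 2. / N  # passed as weight_regularizer
dd = 2. / N       # passed as dropout_regularizer
```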

koenigpeter · Jun 26 '19 09:06

Hi! I agree, and I am confused for the same reasons. I read the paper and did not understand how the weight regularizer and dropout regularizer are initialized. Could you please tell us what the prior length scale means, and which value to assign to it?
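
For what it's worth, here is my partial reading of the Keras implementation, which is where I got stuck (please correct me if any of this is wrong):

```python
# My reading of the Keras ConcreteDropout docstring:
#   weight_regularizer  = l**2 / (tau * N)   # l: prior length scale
#   dropout_regularizer = 2.   / (tau * N)   # tau: model precision, N: dataset size
l, tau, N = 1e-4, 1.0, 60000                 # illustrative values only

weight_regularizer = l ** 2 / (tau * N)
dropout_regularizer = 2. / (tau * N)

# Inside the layer, the dropout term is (as far as I can tell) additionally
# multiplied by the layer's input dimension, which may be where the
# N x H x W-style scaling mentioned above comes from:
#   dropout_reg  = p * log(p) + (1 - p) * log(1 - p)   # negative Bernoulli entropy
#   dropout_reg *= dropout_regularizer * input_dim
```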

axel971 · Feb 12 '21 19:02

I am also confused about this

JFagin · Feb 03 '22 05:02