
[Question][MC dropout] Drop hidden units for each data point or for each mini-batch?

Open alwaysuu opened this issue 3 years ago • 1 comment

Hi JavierAntoran,

First of all, thanks for the wonderful code; it has been really helpful for my work in this area. I'd like to ask a question about MC dropout. In BBB with local reparameterization, the activations are sampled for each data point, instead of sampling a weight matrix directly, to reduce the computational cost. So, in MC dropout, should we follow a similar procedure, i.e. drop hidden units for each data point during training or testing? I notice that your MC dropout model seems to use the same dropout mask for a whole mini-batch, and the default batch size is 128. Should I change the batch size to 1 to achieve the goal of dropping hidden units for each data point?
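For concreteness, here is a minimal sketch (in PyTorch, not taken from this repo) of the two mask-sampling granularities being contrasted; all sizes are illustrative:

```python
import torch

torch.manual_seed(0)
batch_size, n_hidden, p = 4, 6, 0.5     # illustrative sizes, not the repo's defaults
h = torch.randn(batch_size, n_hidden)   # hidden-layer activations for one mini-batch

# Per-mini-batch: one mask shared by every data point in the batch
mask_shared = torch.bernoulli(torch.full((1, n_hidden), 1 - p)) / (1 - p)
h_shared = h * mask_shared              # the same units are dropped for all inputs

# Per-data-point: an independent mask for each input
mask_per_point = torch.bernoulli(torch.full((batch_size, n_hidden), 1 - p)) / (1 - p)
h_per_point = h * mask_per_point        # different units are dropped for each input
```

The per-data-point version reproduces, within a single batch, what running with batch size 1 would give.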

Looking forward to your reply. Thanks a lot!

alwaysuu avatar Oct 18 '21 08:10 alwaysuu

Hi @alwaysuu,

Yes, changing the batch size to 1 will result in different weights being used for each input. However, it could make training very slow due to the large variance in the estimate of the loss.
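For reference, a minimal sketch of MC-dropout prediction at test time with a generic PyTorch model (the architecture and number of samples below are illustrative, not this repo's defaults):

```python
import torch
import torch.nn as nn

# Illustrative model, not the one in this repo
model = nn.Sequential(
    nn.Linear(10, 50), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(50, 1)
)

x = torch.randn(128, 10)  # a mini-batch of test inputs

model.train()  # keep dropout stochastic at test time (MC dropout)
with torch.no_grad():
    # T stochastic forward passes; a fresh dropout mask is drawn on each pass
    samples = torch.stack([model(x) for _ in range(100)])  # (100, 128, 1)

pred_mean = samples.mean(dim=0)  # predictive mean
pred_std = samples.std(dim=0)    # predictive uncertainty
```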

Javier

JavierAntoran avatar Aug 16 '22 16:08 JavierAntoran