bayesian-neural-network-blogpost

Should we sample when modeling aleatoric uncertainty?

Open · ShellingFord221 opened this issue on Dec 21, 2019 · 0 comments

Hi, in What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?, it seems that when modeling aleatoric uncertainty only, dropout is disabled and the model is just a normal neural network that predicts the mean and variance of the output for each input. Please see Section 2.2:

Note that here, unlike the above, variational inference is not performed over the weights, but instead we perform MAP inference – finding a single value for the model parameters θ. This approach does not capture epistemic model uncertainty, as epistemic uncertainty is a property of the model and not of the data.

Otherwise, modeling aleatoric uncertainty alone would be the same as modeling both kinds of uncertainty together.
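
To make the question concrete, here is a minimal sketch of how I read the two setups (plain PyTorch, not this repo's Keras code; all names and sizes are illustrative): the aleatoric-only model of Section 2.2 is a deterministic MAP fit trained with the heteroscedastic loss, and a single forward pass at test time gives the predicted variance with no sampling; sampling (e.g. MC dropout) only enters when epistemic uncertainty is added on top.

```python
import torch
import torch.nn as nn

# Hypothetical regression net that outputs a predictive mean and log-variance.
class HeteroscedasticNet(nn.Module):
    def __init__(self, in_dim=10, hidden=64, p_drop=0.5):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(p_drop),  # only matters if we sample (epistemic part)
        )
        self.mean_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)  # s = log(sigma^2), for stability

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

def heteroscedastic_loss(y, mean, logvar):
    # 0.5 * exp(-s) * (y - mu)^2 + 0.5 * s, averaged over the batch
    return (0.5 * torch.exp(-logvar) * (y - mean) ** 2 + 0.5 * logvar).mean()

model = HeteroscedasticNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 10), torch.randn(32, 1)

# Training: a plain MAP fit of the weights -- no sampling anywhere.
mean, logvar = model(x)
loss = heteroscedastic_loss(y, mean, logvar)
opt.zero_grad()
loss.backward()
opt.step()

# Aleatoric only (Section 2.2): dropout off, one deterministic forward pass.
model.eval()
with torch.no_grad():
    mean, logvar = model(x)
    aleatoric_var = torch.exp(logvar)  # the predicted data noise itself

# Both uncertainties (Section 2.3): keep dropout active and sample T passes.
model.train()  # leaves dropout on at test time (MC dropout)
with torch.no_grad():
    samples = [model(x) for _ in range(20)]
    means = torch.stack([m for m, _ in samples])
    vars_ = torch.stack([torch.exp(s) for _, s in samples])
    epistemic_var = means.var(dim=0)   # spread of the sampled means
    aleatoric_var = vars_.mean(dim=0)  # average predicted data noise
```

If that reading is right, the answer to the title question would be "no" for the aleatoric-only case, and sampling is only needed once dropout is kept on to capture the epistemic part. Is that correct?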
