ShellingFord221

Results: 28 comments of ShellingFord221

But if so, could all neural networks that merely have dropout layers be called Bayesian neural networks? I don't think so!

As @damienlancry says, the experiment in section 5.2 is a comparison between a CNN with dropout layers and a CNN without dropout layers. But dropout layers are very common in recent models,...

Thanks! So in a word, a BCNN is a CNN that keeps its dropout layers active at test time, but is the same as a common CNN during training?
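
(For my own understanding, here is a minimal sketch of that reading, written with the current tf.keras API rather than the older Keras used in this repo; the toy model and shapes are made up just for illustration, not taken from the repo.)

```python
import numpy as np
import tensorflow as tf

# An ordinary CNN with dropout; nothing Bayesian is added at training time.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation="softmax"),
])
# ... model.compile(...) and model.fit(...) exactly as for a plain CNN ...

x = np.random.rand(4, 28, 28, 1).astype("float32")

# Standard deterministic prediction: dropout is OFF.
p_det = model(x, training=False).numpy()

# MC dropout prediction: keep dropout ON and average T stochastic passes.
T = 50
samples = np.stack([model(x, training=True).numpy() for _ in range(T)])
p_mc = samples.mean(axis=0)   # approximate predictive mean
p_var = samples.var(axis=0)   # spread across passes, used as model uncertainty
```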

I have another question... In MC_Dropout_Keras/Dropout_Bald_Q10_N1000_Paper.py, for example, I think the Bayesian part is shown in line 228: `dropout_score = model.predict_stochastic(X_Pool_Dropout, batch_size=batch_size, verbose=1)`, which measures the uncertainties. But I didn't...
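
(If I understand the script name correctly, those stochastic predictions are then turned into a BALD acquisition score. Here is a hedged numpy sketch of that computation; `mc_probs`, `bald_score` and the other names are my own placeholders, not the script's variables.)

```python
import numpy as np

def bald_score(mc_probs, eps=1e-12):
    """BALD acquisition from MC dropout softmax samples.

    mc_probs: array of shape (T, N, C) -- T stochastic forward passes
              over N pool points with C classes.
    """
    mean_p = mc_probs.mean(axis=0)                                        # (N, C)
    entropy_of_mean = -np.sum(mean_p * np.log(mean_p + eps), axis=1)      # H[E[p]]
    mean_entropy = -np.mean(
        np.sum(mc_probs * np.log(mc_probs + eps), axis=2), axis=0)        # E[H[p]]
    return entropy_of_mean - mean_entropy                                 # mutual information

# Fake (T=20, N=5, C=10) softmax samples, just to show the shapes.
mc = np.random.dirichlet(np.ones(10), size=(20, 5))
scores = bald_score(mc)   # higher score = more informative pool point
```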

@damienlancry Thank you so much for your kind help! I still wonder: if only one new function is needed, is it necessary to rewrite the whole of Keras? I don't...
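
(For what it's worth, I don't think the whole of Keras needs to be rewritten. With the older Keras backend API (TF1-style, with a learning phase), a stochastic predict can be built on top of an existing model, something like the sketch below. This is only my assumption about how `predict_stochastic` might be done, not the repo's actual code; `model` and `X_Pool_Dropout` refer to the script's own objects.)

```python
from keras import backend as K

# Build a backend function that runs the model with learning_phase = 1,
# so dropout stays active even though we are "predicting".
stochastic_predict = K.function(
    [model.input, K.learning_phase()],
    [model.output],
)

# One stochastic forward pass over the pool set (dropout ON).
dropout_probs = stochastic_predict([X_Pool_Dropout, 1])[0]
```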

Besides, is there any PyTorch version of BCNN? I think it would be a bit easier in PyTorch to implement MC dropout at test time.
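
(To make that concrete, a minimal PyTorch sketch of MC dropout at test time might look like this; the architecture is an arbitrary toy CNN, not from any of the repos discussed here.)

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3), nn.ReLU(), nn.MaxPool2d(2), nn.Flatten(),
        )
        self.drop = nn.Dropout(p=0.5)
        self.fc = nn.Linear(32 * 13 * 13, n_classes)

    def forward(self, x):
        return self.fc(self.drop(self.features(x)))

def mc_dropout_predict(model, x, T=50):
    model.eval()                     # batch norm etc. stay in eval mode
    for m in model.modules():        # ...but dropout is switched back on
        if isinstance(m, nn.Dropout):
            m.train()
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(T)])
    return probs.mean(0), probs.var(0)   # predictive mean, epistemic spread

x = torch.randn(4, 1, 28, 28)
mean, var = mc_dropout_predict(SmallCNN(), x)
```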

As for the output of the model, in epistemic-uncertainty/model.py, it seems that at test time you compute the variance of the `self.test_trials` outputs as the epistemic uncertainty, and then you get the output...
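
(If I read it correctly, the computation is roughly the following; this is just my paraphrase in numpy with placeholder names, not the repo's code.)

```python
import numpy as np

def epistemic_from_trials(stochastic_outputs):
    """stochastic_outputs: shape (test_trials, N) -- one row per
    dropout-enabled forward pass over the same N test points."""
    prediction = stochastic_outputs.mean(axis=0)   # averaged output
    epistemic = stochastic_outputs.var(axis=0)     # variance across trials
    return prediction, epistemic
```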

Sorry, I'm confused. In _What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?_, it says that

> Dropout variational inference ... is done by training a model...
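
(For reference, the approximation that sentence describes is the usual MC dropout estimator:

$$
p(y^* \mid x^*, \mathcal{D}) \;\approx\; \frac{1}{T}\sum_{t=1}^{T} p\big(y^* \mid x^*, \hat{w}_t\big), \qquad \hat{w}_t \sim q(w),
$$

where each $\hat{w}_t$ is obtained by sampling a fresh dropout mask at test time, i.e. one stochastic forward pass.)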

Emmm... let's talk about 'whether we should use dropout and average at test time' in another way. In the combined model (also at test time), you disable dropout and get `tf.exp(self.log_var2)`...
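
(To make my question concrete: with dropout enabled and averaged over T passes, I would expect the combined model's total uncertainty to decompose roughly like this, following the Kendall & Gal paper; the variable names below are placeholders, not the repo's.)

```python
import numpy as np

def combined_uncertainty(mc_means, mc_log_vars):
    """mc_means, mc_log_vars: shape (T, N) -- the mean head and the
    log-variance head from T dropout-enabled forward passes."""
    epistemic = mc_means.var(axis=0)               # spread of the mean head
    aleatoric = np.exp(mc_log_vars).mean(axis=0)   # averaged learned noise
    return epistemic + aleatoric                   # total predictive variance
```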