Active-Learning-Bayesian-Convolutional-Neural-Networks
how do you implement BCNN?
In the file Active-Learning-Bayesian-Convolutional-Neural-Networks/ConvNets/active_learning/BCNN_cifar10.py, the architecture of the model is still a plain CNN rather than a BCNN. Besides, the training method in this file is SGD + momentum, which is the standard CNN setup rather than a Bayesian one (the training method for a BCNN should be something like Bayes by Backprop, i.e. BBB). So how do you implement the BCNN with Keras in your experiment?
I think the author is using the technique "MC dropout" to do the Bayesian inference: if we add dropout after each convolutional layer and keep it active at test time, the CNN model becomes a BCNN. Training this BCNN in the usual way (minimizing the loss, e.g. cross-entropy with weight decay) has the same effect as minimizing the KL divergence in variational inference, so we obtain an approximate posterior distribution. At prediction time, the model is run several times with dropout turned on and the outputs are averaged.
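Below is a minimal sketch of that MC-dropout idea in tf.keras, not the repo's original Keras 1.x code; the layer sizes, dropout rates, and the `mc_predict` helper are illustrative assumptions, not the authors' exact settings.

```python
# Sketch of an MC-dropout "BCNN": an ordinary CNN with dropout after each
# conv block, sampled stochastically at test time (assumed architecture,
# not the repo's exact one).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def build_mc_dropout_cnn(input_shape=(32, 32, 3), num_classes=10, rate=0.25):
    inputs = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = layers.Dropout(rate)(x)                      # dropout after the conv layer
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Dropout(rate)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(128, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = build_mc_dropout_cnn()
# Ordinary training (SGD + momentum on cross-entropy, plus weight decay)
# corresponds to the variational objective under the MC-dropout view.
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

def mc_predict(model, x, T=20):
    # Approximate the posterior predictive by averaging T stochastic forward
    # passes; training=True keeps dropout active at test time.
    probs = np.stack([model(x, training=True).numpy() for _ in range(T)], axis=0)
    return probs.mean(axis=0)
```

So the "Bayesian" part lives entirely in keeping dropout on at prediction time and averaging the samples; the training loop itself is unchanged, which is why the file looks like a standard CNN.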
Any updates from the authors on this issue, i.e. how the BCNN is implemented?