DropoutUncertaintyExps
How is the test log likelihood calculated?
I don't quite understand the calculation of the test log-likelihood:
# We compute the test log-likelihood
ll = (logsumexp(-0.5 * self.tau * (y_test[None] - Yt_hat)**2., 0) - np.log(T)
- 0.5*np.log(2*np.pi) + 0.5*np.log(self.tau))
test_ll = np.mean(ll)
Why is logsumexp used, and why are the predictive variances not used?
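If I read the code correctly (this is my reading, not something stated in the repo), it computes, per test point, the log of an equally weighted mixture of T Gaussians, one per stochastic forward pass:

$$\log p(y^\ast \mid x^\ast) \approx \log \frac{1}{T} \sum_{t=1}^{T} \mathcal{N}\!\left(y^\ast;\, \hat{y}_t,\, \tau^{-1}\right) = \operatorname{logsumexp}_t\!\left(-\tfrac{\tau}{2}\,(y^\ast - \hat{y}_t)^2\right) - \log T - \tfrac{1}{2}\log 2\pi + \tfrac{1}{2}\log\tau$$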
I tried to calculate the test log likelihood like this:
from scipy.stats import norm

# MC_pred is the Monte Carlo predictive mean, i.e. np.mean(Yt_hat, axis=0)
pred_var = np.var(Yt_hat, axis=0) + 1.0 / self.tau
ll = []
for i in range(y_test.shape[0]):
    ll.append(norm.logpdf(y_test[i][0], MC_pred[i][0], np.sqrt(pred_var[i][0])))
new_test_ll = np.mean(ll)
This usually gives a slightly worse log-likelihood. For example, on the concrete dataset with split id set to 19, the log-likelihood from the original code is -3.17, while the code above gives -3.25.
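For reference, here is a small self-contained sketch of the two computations side by side. The data, tau, and shapes are synthetic placeholders, not the concrete split:

import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

# Synthetic stand-ins for the repo's quantities (shapes follow the code above).
rng = np.random.default_rng(0)
T, N = 100, 50                       # number of MC samples, number of test points
tau = 4.0                            # assumed precision
y_test = rng.normal(size=(N, 1))
Yt_hat = y_test[None] + rng.normal(scale=0.3, size=(T, N, 1))   # fake stochastic predictions

# Mixture-of-Gaussians log-likelihood, as in the repo's code.
ll_mix = (logsumexp(-0.5 * tau * (y_test[None] - Yt_hat)**2., 0) - np.log(T)
          - 0.5 * np.log(2 * np.pi) + 0.5 * np.log(tau))

# Single moment-matched Gaussian, as in my code above.
MC_pred = np.mean(Yt_hat, axis=0)
pred_var = np.var(Yt_hat, axis=0) + 1.0 / tau
ll_gauss = norm.logpdf(y_test, MC_pred, np.sqrt(pred_var))

print(np.mean(ll_mix), np.mean(ll_gauss))    # the two averages differ slightly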
Also, in the PBP code, the log-likelihood is calculated like this:
test_ll = np.mean(-0.5 * np.log(2 * math.pi * (v + v_noise))
                  - 0.5 * (y_test - m)**2 / (v + v_noise))
This seems to differ from the way the log-likelihood is calculated in the MC-dropout code.
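Written out, that expression looks like the log density of a single Gaussian with variance v + v_noise, which (unless I am misreading it) is the same moment-matched form as my calculation above:

$$\log p(y \mid x) = -\tfrac{1}{2}\log\!\big(2\pi (v + v_{\text{noise}})\big) - \frac{(y - m)^2}{2\,(v + v_{\text{noise}})} = \log \mathcal{N}\!\left(y;\, m,\, v + v_{\text{noise}}\right)$$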