vl-dud
> I am curious how useful it is. In general, I found that dropout is not that useful, and L1/L2 regularization seems good enough. Hyperparameter tuning showed that in my case...
I just run `model.predict` many times to get the final prediction with a CI:

```python
import numpy as np

def predict_with_uncertainty(model, x, trial_num=100):
    predictions = []
    for _ in range(trial_num):
        predictions.append(model.predict(x))
    return np.mean(predictions, axis=0), np.std(predictions, axis=0)
```
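For reference, here is a minimal self-contained sketch of the same idea, using a hypothetical stochastic predictor (a plain callable with injected noise) in place of a trained Keras model with dropout active at inference time, so the aggregation step can be run without TensorFlow:

```python
import numpy as np

def predict_with_uncertainty(model, x, trial_num=100):
    # Run the stochastic model repeatedly and aggregate the samples:
    # the mean is the point estimate, the std is the uncertainty.
    predictions = [model(x) for _ in range(trial_num)]
    return np.mean(predictions, axis=0), np.std(predictions, axis=0)

# Hypothetical stand-in for a model whose forward pass is stochastic
# (as it would be with dropout enabled at prediction time).
rng = np.random.default_rng(0)

def noisy_model(x):
    return x * 2.0 + rng.normal(scale=0.1, size=x.shape)

x = np.ones((4, 1))
mean, std = predict_with_uncertainty(noisy_model, x, trial_num=200)
# mean is close to 2.0 per element; std reflects the injected noise scale.
```

With a real Keras model, `noisy_model` would be replaced by the model itself, and dropout must actually be active during prediction for the per-trial outputs to differ.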
Yes, I used `DropoutUncertainty`. The code snippet above is more convenient in my case, since I use it with already-trained models. In addition, it lets you set `trial_num`,...
The example works correctly for all strategies, both with TF 1.x and 2.x.
I see, in that case I will close this PR.