forest-confidence-interval
Benchmarking confidence intervals
For my dataset, I tried correlating the CIs with the absolute error on the test set and didn't find a relationship. I do get a relationship if I instead use the standard deviation of the predictions from the individual decision trees. Do you see this with other datasets?
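For concreteness, a minimal sketch of this benchmark on synthetic data could look like the following. It assumes the documented `fci.random_forest_error(forest, X_train, X_test)` call (depending on your forestci version the second argument may need to be `X_train.shape` instead), and all dataset and variable names are illustrative:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
import forestci as fci

# Synthetic noisy regression problem (illustrative only)
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestRegressor(n_estimators=500, random_state=0)
forest.fit(X_train, y_train)

abs_err = np.abs(forest.predict(X_test) - y_test)

# Infinitesimal-jackknife variance from forestci -> standard error;
# clip at zero since calibration can occasionally yield small negative values
V_IJ = fci.random_forest_error(forest, X_train, X_test)
se_ij = np.sqrt(np.clip(V_IJ, 0.0, None))

# Spread of the individual trees' predictions, for comparison
tree_preds = np.stack([tree.predict(X_test) for tree in forest.estimators_])
sd_trees = tree_preds.std(axis=0)

print("IJ standard error vs |error|: r = %.3f" % pearsonr(se_ij, abs_err)[0])
print("per-tree std      vs |error|: r = %.3f" % pearsonr(sd_trees, abs_err)[0])
```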
You raise a good point: in a noisy dataset (i.e., one where a replicated sample could yield different values for the target), a good model should have the IJK error correlate with the true error, or, equivalently, the error bars in the parity plot should touch the bisecting line.
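For reference, the parity-plot check I mean could be sketched as follows, reusing the variables from the snippet in your question (a hedged example; with well-calibrated uncertainties the ±1 standard-error bars should roughly reach the bisecting line):

```python
import matplotlib.pyplot as plt

# Parity plot: predicted vs. true test targets with ±1 SE error bars.
# For well-calibrated uncertainties the bars should roughly reach y = x.
y_pred = forest.predict(X_test)
plt.errorbar(y_test, y_pred, yerr=se_ij, fmt="o", alpha=0.5,
             label="prediction ± IJ standard error")
lims = [min(y_test.min(), y_pred.min()), max(y_test.max(), y_pred.max())]
plt.plot(lims, lims, "k--", label="bisecting line")
plt.xlabel("true target")
plt.ylabel("predicted target")
plt.legend()
plt.show()
```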
In my experience with the IJK, this correlation usually holds for the predictions and errors on the training set, but not on the test set unless the model is very good: fitted on a representative training set and not overfitting. If it is hard for an ML model to predict the mean of the target, it is intrinsically harder to get a good estimate of its variance, since the model also needs enough training samples to learn the random noise of the dataset.
So let me ask: could you please share your dataset as an example? How confident are you in your model? It seems more likely to me that you don't have enough samples for a model good enough to extract a reliable standard deviation on the test set with the IJK than that the IJK itself is making a wrong estimate of the error.
Thanks for your contribution; I'm very interested in digging deeper into the power and limits of the IJK through practical examples from users!