
Conf Intervals - qnorm approach

Open • mdancho84 opened this issue 4 years ago • 4 comments

Review if the qnorm() approach should be used.

mdancho84 • Aug 04 '20

Would be nice if confidence/prediction intervals increased with the forecast horizon h, e.g. using some of the adjustments to the standard deviation that Hyndman applies in FPP2, section 3.5 (Prediction intervals) (maybe having an argument available to specify the method, defaulting to something conservative).
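For illustration, a minimal sketch of one such adjustment -- the naive-method scaling sigma_h = sigma * sqrt(h) from FPP -- applied on top of a qnorm()-style interval. The `.pred` column name and both inputs are hypothetical, not modeltime's API:

```r
# Sketch: widen a qnorm()-based interval with the forecast horizon h,
# using sigma_h = sigma * sqrt(h) (the naive-method scaling from FPP).
# `preds` (point forecasts) and `test_resid` (test-set residuals) are
# hypothetical inputs, not modeltime objects.
library(dplyr)

widen_interval <- function(preds, test_resid, level = 0.95) {
  sigma <- sd(test_resid, na.rm = TRUE)
  z     <- qnorm(1 - (1 - level) / 2)

  preds %>%
    mutate(
      h        = row_number(),            # steps ahead
      sigma_h  = sigma * sqrt(h),         # interval width grows with h
      .conf_lo = .pred - z * sigma_h,
      .conf_hi = .pred + z * sigma_h
    )
}

# Example with made-up values
widen_interval(
  preds      = data.frame(.pred = c(100, 102, 105, 103)),
  test_resid = rnorm(50, sd = 5)
)
```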

Would be great to have some of the bootstrap (or block bootstrap) approaches Hyndman describes available as well. I recently opened tidymodels/parsnip#464, which links to a post I wrote on bootstrapping prediction intervals for regression problems in tidymodels. I'd thought about adjusting that example so a custom resampling object could be passed in, e.g. to set up time-series resampling schemes (but I haven't really thought through what this would entail -- I also likely need to look more into alternative approaches, e.g. from the field of conformal inference, as the methodology I walk through is computationally costly). (This point may be more appropriate for {modeltime.resample}, though that package seemed primarily focused on performance evaluation...)
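As a rough sketch of the residual-bootstrap idea (not the exact methodology from the linked post), future paths are simulated by resampling out-of-sample residuals and the interval is read off the simulated quantiles; `point_fc` and `resid` are hypothetical inputs:

```r
# Sketch: residual bootstrap for prediction intervals.
# `point_fc` (point forecasts) and `resid` (out-of-sample residuals)
# are hypothetical inputs, not modeltime objects.
set.seed(123)

bootstrap_interval <- function(point_fc, resid, times = 1000, level = 0.95) {
  h <- length(point_fc)

  # Simulate `times` future paths by adding resampled residuals (h x times matrix)
  sims <- replicate(times, point_fc + sample(resid, h, replace = TRUE))

  alpha <- (1 - level) / 2
  data.frame(
    .pred    = point_fc,
    .conf_lo = apply(sims, 1, quantile, probs = alpha),
    .conf_hi = apply(sims, 1, quantile, probs = 1 - alpha)
  )
}

bootstrap_interval(point_fc = c(100, 102, 105), resid = rnorm(50, sd = 5))
```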

In your overview video you briefly mention distributional forecasts; I'm interested to read the plans for that, if they're documented somewhere? (Also curious how that may compare/contrast with {fable}'s approach of creating separate distribution objects that can then handle reconciliation or aggregation schemes.)

I am just diving into {modeltime} -- really awesome stuff! (Apologies if I missed available documentation somewhere regarding my questions/comments.) (I also realize this is a bit of a stretch for this topic, though I didn't want to spam you with a bunch of new issues -- feel free to let me know if I should open these separately or elsewhere.)

brshallo • Apr 16 '21

Hey, thanks for this. I'd like to make some improvements here, mainly to make it more scalable by ID of the series rather than a global confidence interval.

The existing approach is a prediction interval based on test set error. The docs can certainly be updated to reflect this.
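In spirit, that behavior could be sketched roughly as follows, assuming hypothetical `.actual`/`.prediction` columns in a calibration data frame (an illustration only, not modeltime's actual internals):

```r
# Sketch: a flat qnorm()-based prediction interval from test-set residuals.
# `calib` is a hypothetical calibration data frame, not modeltime's internals.
library(dplyr)

add_qnorm_interval <- function(calib, level = 0.95) {
  z <- qnorm(1 - (1 - level) / 2)

  calib %>%
    mutate(
      .resid   = .actual - .prediction,
      .conf_lo = .prediction - z * sd(.resid, na.rm = TRUE),  # constant width
      .conf_hi = .prediction + z * sd(.resid, na.rm = TRUE)
    )
}

add_qnorm_interval(
  data.frame(.actual = rnorm(20, 100, 5), .prediction = rep(100, 20))
)
```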

Bandwidth is tight at the moment, so I'll need assistance on any changes you'd like to see in the near term.

mdancho84 • Apr 16 '21

Thanks, I'll report back in a few weeks (after I've had time to familiarize myself more with the package). But as a starting point from there, I think I could help with adding a few notes to the documentation (per #102), as well as attempting:

"make it more scalable by ID of the series rather than a global confidence interval."

I assume that by ID you are referring to the index of the series (as opposed to the model ID, for example) and to how uncertainty scales with the number of steps ahead (per my first point).

brshallo • Apr 16 '21

Sounds good. And yes - we should have a way of calculating local confidence intervals and local accuracy, now that most of our modeling approaches / algorithms accept panel data with time series that have an ID.
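Roughly, a "local" interval could look like the following sketch, where the interval width comes from each series' own residuals via a hypothetical `id` column (illustrative column names, not modeltime's API):

```r
# Sketch: per-series ("local") interval widths, grouping residuals by a
# hypothetical `id` column instead of pooling them globally.
library(dplyr)

local_intervals <- function(calib, level = 0.95) {
  z <- qnorm(1 - (1 - level) / 2)

  calib %>%
    mutate(.resid = .actual - .prediction) %>%
    group_by(id) %>%
    mutate(
      .conf_lo = .prediction - z * sd(.resid, na.rm = TRUE),
      .conf_hi = .prediction + z * sd(.resid, na.rm = TRUE)
    ) %>%
    ungroup()
}

# Example: two series with very different noise levels get different widths
calib <- data.frame(
  id          = rep(c("series_1", "series_2"), each = 10),
  .actual     = c(rnorm(10, 100, 2), rnorm(10, 500, 20)),
  .prediction = rep(c(100, 500), each = 10)
)
local_intervals(calib)
```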

mdancho84 • Apr 16 '21

Local confidence intervals and Conformal Prediction intervals are now being tracked in #173
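For reference, a minimal split-conformal sketch (not the implementation tracked in #173): the interval half-width is a quantile of absolute calibration residuals, which avoids the normality assumption behind qnorm(). All inputs are hypothetical:

```r
# Sketch: split conformal prediction interval from calibration residuals.
# `calib_actual`, `calib_pred`, and `new_pred` are hypothetical inputs.
conformal_interval <- function(calib_actual, calib_pred, new_pred, level = 0.95) {
  scores <- abs(calib_actual - calib_pred)   # nonconformity scores
  n      <- length(scores)

  # Finite-sample-adjusted quantile of the scores
  q_hat <- quantile(scores, probs = min(1, ceiling((n + 1) * level) / n))

  data.frame(
    .pred    = new_pred,
    .conf_lo = new_pred - q_hat,
    .conf_hi = new_pred + q_hat
  )
}

conformal_interval(
  calib_actual = rnorm(100, mean = 100, sd = 5),
  calib_pred   = rep(100, 100),
  new_pred     = c(101, 99, 103)
)
```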

mdancho84 • Sep 03 '23