
Graceful failure when forecasting a mable

Open Fuco1 opened this issue 4 years ago • 2 comments

Currently, running forecast on a mable fails if any of the models errors out while being forecast. Is there a way to either drop such a model completely or return some "NULL forecast" from it?

We run batches of 1000 models, which take about 20 minutes to compute, and then the whole job immediately fails with an error from just one bad model.

Basically, I'm after a feature similar to how a bad model train function returns a NULL model instead of killing the entire process.
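Something along these lines is what I have in mind: forecast the mable one row at a time and drop whatever fails, instead of aborting everything. This is an untested sketch (my_mable and the horizon are placeholders):

```r
library(fabletools)
library(dplyr)
library(purrr)

# Untested sketch: forecast one mable row at a time, dropping rows whose
# forecast errors out instead of failing the whole batch.
safe_forecast <- function(mbl, ...) {
  fcs <- map(seq_len(nrow(mbl)),
             possibly(function(i) forecast(mbl[i, ], ...), otherwise = NULL))
  n_failed <- sum(map_lgl(fcs, is.null))
  if (n_failed > 0) message(n_failed, " model(s) failed to forecast and were dropped")
  # bind_rows() may downgrade the result from a fable to a plain tibble
  bind_rows(compact(fcs))
}

# fc <- safe_forecast(my_mable, h = "12 weeks")
```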

Fuco1 avatar Dec 08 '20 17:12 Fuco1

Interesting, I'm surprised that a forecast would fail if the model fitted successfully. I think batch error handling can be improved in general by applying the technique consistently across all modelling methods.

mitchelloharawild avatar Dec 10 '20 11:12 mitchelloharawild

I forgot to add that this is not a model from fable but one we built in-house (thanks for making fabletools so flexible!). So it's quite buggy, but in general when it fails it's on something like a division by 0 or an intermediate value being NaN, which 99% of the time means the input time series was deficient and we don't want to bother with it.

Our scope is around 2 million time series forecasts for long-term planning, so if 0.1% of them fail we don't really care that much, especially since the data will likely be fixed the next day.

In our case it is all the more painful because the models train for about 10-30 minutes depending on the selection of series (as I said, we batch them at 1000 per job), while the forecast step only takes about 1 minute, so a failure wastes a lot of work.
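In the meantime we can probably pre-filter the obviously deficient series before modelling. A rough, untested sketch (the thresholds, sales_ts and value are made up):

```r
library(dplyr)
library(tsibble)

# Drop series that are likely to break the in-house model: too short or constant.
usable <- sales_ts %>%
  group_by_key() %>%
  filter(sum(!is.na(value)) >= 24,          # enough observations (arbitrary threshold)
         sd(value, na.rm = TRUE) > 0) %>%   # not constant, avoids division by zero
  ungroup()
```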

Another option would be to somehow model + forecast in one step, i.e. instead of doing:

1000x model
1000x forecast

run it like

1000x (model + forecast)

but this would arguably be too big a change, and the current pipeline is elegant in that at each step I get an object I can reason about.
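If we did go that route, it could look something like the following per batch, so that one bad series only loses its own forecast. Again an untested sketch; INHOUSE() stands in for our in-house model definition and the column names are placeholders:

```r
library(fabletools)
library(tsibble)
library(dplyr)
library(purrr)

# Untested sketch: fit and forecast one series at a time, skipping failures.
model_and_forecast <- function(ts_batch, h = "12 weeks") {
  keys <- key_data(ts_batch) %>% select(-.rows)   # one row per series
  fcs <- map(seq_len(nrow(keys)), possibly(function(i) {
    ts_batch %>%
      semi_join(keys[i, ], by = names(keys)) %>%
      model(inhouse = INHOUSE(value)) %>%
      forecast(h = h)
  }, otherwise = NULL))
  bind_rows(compact(fcs))   # may downgrade from a fable to a plain tibble
}
```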

Some graceful failure/fallback on a forecast method error would be more than sufficient, especially when running at scale.

Fuco1 avatar Dec 10 '20 13:12 Fuco1