madtoinou (203 comments)

In this context, if you use `output_chunk_length=1` with `lags_past_covariates=[-24]` (or `lags_future_covariates=[-24]`), and set `n=24` when calling `predict()` or `forecast_horizon=24` when calling `historical_forecasts()`, you will get the intended behavior. Furthermore, auto-regression will not really...
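A plain-Python sketch (not Darts itself; the index bookkeeping is illustrative) of why this combination needs no auto-regression for the first 24 steps: with a single covariate lag of -24, every forecast step only consumes covariate values that are already observed.

```python
def required_covariate_indices(forecast_steps, lags):
    """For each forecast step, the covariate time indices it consumes."""
    return {t: [t + lag for lag in lags] for t in forecast_steps}

last_observed = 0   # index of the last observed time step
horizon = 24        # n=24 / forecast_horizon=24
steps = range(last_observed + 1, last_observed + horizon + 1)

needed = required_covariate_indices(steps, lags=[-24])

# with output_chunk_length=1 and a single lag of -24, every required
# covariate index is <= last_observed: all 24 steps can be produced
# from already-observed covariates alone
assert all(i <= last_observed for idxs in needed.values() for i in idxs)
```

Step t+1 reads the covariate at t-23 and step t+24 reads the covariate at t, so the model never needs a covariate value beyond the observed range.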

Correct, the forecast quality might deteriorate when `n > 24`, as the model will start to consume its own forecasts to predict the next steps (e.g. the forecast made at t0 when predicting t+24). Note...
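A minimal sketch of the recursion being described (plain Python; `step_fn` and the buffer handling are illustrative, not the Darts internals): once the observed inputs are exhausted, each new block of forecasts is fed back in as input for the next one, so errors can compound.

```python
def autoregressive_predict(history, n, output_chunk_length, step_fn):
    """Recursively forecast n steps, output_chunk_length values at a time."""
    buffer = list(history)
    predictions = []
    while len(predictions) < n:
        out = step_fn(buffer)   # emits output_chunk_length new values
        predictions.extend(out)
        buffer.extend(out)      # the model consumes its own forecasts,
                                # so later steps build on earlier
                                # (possibly imperfect) predictions
    return predictions[:n]

# toy one-step model: next value = last value + 1
preds = autoregressive_predict(
    [0, 1, 2], n=5, output_chunk_length=1,
    step_fn=lambda buf: [buf[-1] + 1],
)
# preds == [3, 4, 5, 6, 7]: steps 2..5 were computed from forecasts, not data
```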

Hi @carlocav, I like the idea of supporting more types for the "stride" parameter; however, it will require a lot of changes in the current logic of `historical_forecasts()`. If time...
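A hypothetical sketch of what a more general `stride` could look like (the function name and the iterable-of-offsets semantics are invented for illustration; this is not the current `historical_forecasts()` logic):

```python
def forecast_start_points(start, end, stride):
    """Hypothetical: derive forecast start points from a generalized stride."""
    if isinstance(stride, int):
        # current behavior: a fixed integer step between forecast starts
        return list(range(start, end, stride))
    # possible extension: an explicit iterable of offsets from `start`
    return [start + s for s in stride if start + s < end]

assert forecast_start_points(0, 10, 3) == [0, 3, 6, 9]
assert forecast_start_points(0, 10, [0, 1, 5]) == [0, 1, 5]
```

Supporting anything beyond an integer would ripple through the validation, the optimized path, and the time-index arithmetic, which is the "lot of changes" mentioned above.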

Hi @dwolffram, Can you please share a reproducible code snippet? If the model is not probabilistic, it should indeed not be able to generate samples, but I might be missing...

I did a bit of investigation and this is due to a combination of several things: the optimized historical forecast method is called because `retrain=False` and `forecast_horizon`...
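Schematically, the dispatch being described looks like the following (a plain-Python paraphrase; the function name is invented and the exact conditions in Darts may differ):

```python
def uses_optimized_historical_forecasts(retrain, forecast_horizon,
                                        output_chunk_length):
    # sketch of the dispatch: no retraining, and a horizon the model
    # can emit without auto-regression, route to the optimized path
    return retrain is False and forecast_horizon <= output_chunk_length

assert uses_optimized_historical_forecasts(False, 1, 1)
assert not uses_optimized_historical_forecasts(True, 1, 1)
```

This is why flipping `retrain` (or enlarging the horizon) can change which code path, and therefore which behavior, you observe.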

Because there are still `output_chunk_length` different forecasts, due to the erroneous shape of the input tensor.

Hi @ETTAN93, Does this happen with a specific dataset or with all the datasets you're trying to use with the model? Does reducing the size of the model impact the...

Hi @ETTAN93, I tried your code snippet (with some corrections to make it run) and could not reproduce the issue (with the latest version of master). If the model fails...

Closing this issue, @ETTAN93 feel free to reopen if the problem persists.

Hi @KornkamonGib, Can you try the solution described in #1945 and use `"devices": "auto"`? What about `"strategy": "auto"`? Sharing a minimum reproducible example would make it easier to investigate and propose some...
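For reference, these flags are typically passed to a Darts torch-based model through `pl_trainer_kwargs`, which is forwarded to the PyTorch Lightning Trainer (a sketch; the model in the comment is a placeholder, and the other model arguments are elided):

```python
# forwarded verbatim to the PyTorch Lightning Trainer by Darts
# torch-based models
pl_trainer_kwargs = {
    "devices": "auto",    # let Lightning pick the available devices
    "strategy": "auto",   # let Lightning pick the distribution strategy
}
# e.g. SomeTorchModel(..., pl_trainer_kwargs=pl_trainer_kwargs)
# (hypothetical model name; any Darts torch model accepts this argument)
```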