initialize_encoders() when fit_from_dataset() is called
Is your feature request related to a current problem? Please describe.
I am in a situation where I need more fine-grained control over the type of dataset used during fitting with a TFTModel, so I call TFTModel.fit_from_dataset() to train my model. However, if I then run a subsequent evaluation (e.g. historical_forecasts()), an error is thrown:
...
│ /env/lib/python3.10/site-packages/darts/utils/historical_forecasts
│ /utils.py:834 in _process_historical_forecast_input
│
│ 831 │
│ 832 │ model._verify_static_covariates(series[0].static_covariates)
│ 833 │
│ ❱ 834 │ if model.encoders.encoding_available:
│ 835 │ │ past_covariates, future_covariates = model.generate_fit_predict_encodings(
│ 836 │ │ │ n=forecast_horizon,
│ 837 │ │ │ series=series,
AttributeError: 'NoneType' object has no attribute 'encoding_available'
This happens because TFTModel.initialize_encoders() was never called; it would normally be called inside fit(). The error occurs because self.encoders is not set when the model is constructed, regardless of whether encoders were specified at model creation.
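For context, here is a minimal sketch of the kind of workflow that triggers this. The toy series, the dataset class, and the model arguments are illustrative assumptions, not the exact setup from this report:

from darts.models import TFTModel
from darts.utils.data import MixedCovariatesSequentialDataset
from darts.utils.timeseries_generation import linear_timeseries, sine_timeseries

# Toy target and future covariate series, only to make the sketch self-contained.
series = sine_timeseries(length=100)
future_cov = linear_timeseries(length=120)

model = TFTModel(input_chunk_length=12, output_chunk_length=4, n_epochs=1)

# Build the training dataset manually instead of letting fit() do it.
train_ds = MixedCovariatesSequentialDataset(
    target_series=series,
    future_covariates=future_cov,
    input_chunk_length=12,
    output_chunk_length=4,
)

model.fit_from_dataset(train_ds)  # initialize_encoders() is never called here

# Before darts 0.28.0 this raised the AttributeError above, because
# model.encoders is still None when the pre-flight check accesses
# model.encoders.encoding_available.
model.historical_forecasts(series, future_covariates=future_cov, retrain=False)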
Describe proposed solution
I think it would be better if the user didn't have to initialize the encoders manually (see below) when they choose fit_from_dataset() over fit(). Instead, perhaps the initialize_encoders() call referenced above should also happen inside fit_from_dataset()?
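Concretely, the idea is something along these lines inside the model (only a sketch of the suggestion, not a claim about how the actual fix in darts is implemented):

def fit_from_dataset(self, train_dataset, *args, **kwargs):
    # Mirror what fit() already does: set up the encoders before training,
    # so that later calls such as historical_forecasts() find a non-None
    # self.encoders.
    if self.encoders is None:
        self.encoders = self.initialize_encoders()
    ...  # existing training logic unchanged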
Describe potential alternatives
I can solve this manually as a user by running some workflow like:
model = TFTModel(...)
# Manually create the encoders that fit() would normally set up.
model.encoders = model.initialize_encoders()
model.fit_from_dataset(...)
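With model.encoders populated this way, the subsequent historical_forecasts() call gets past the model.encoders.encoding_available check instead of dereferencing None.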
Additional context
I am happy to open a PR if people are interested in pursuing this change.
I had the same issue when using model.fit_from_dataset(training_dataset): it failed with the same error, and I had no way to initialize the encoders. The suggested manual encoder initialization saved me. Many thanks indeed.
This should be fixed now in darts version 0.28.0 :) It was fixed with PR #2261 🚀