neuralforecast
[TimesNet] supported exogenous
What happened + What you expected to happen
[TimesNet] The code below (from timesnet.py) shows that TimesNet only supports future exogenous variables:
```python
if self.stat_input_size > 0:
    raise Exception("TimesNet does not support static variables yet")
if self.hist_input_size > 0:
    raise Exception("TimesNet does not support historical variables yet")
```
If my understanding is correct, the code below (also in timesnet.py) implies that the "future" exogenous variables are actually treated as historical exogenous variables:
```python
# Parse inputs
insample_y = insample_y.unsqueeze(-1)  # [Ws, L, 1]
if self.futr_input_size > 0:
    x_mark_enc = futr_exog[:, : self.input_size, :]
else:
    x_mark_enc = None
```
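To make the slicing above concrete, here is a minimal sketch (with made-up shapes; `Ws`, `L`, `h` follow the comment conventions in the snippet) showing that only the first `input_size` steps of `futr_exog` reach the encoder:

```python
import numpy as np

# Hypothetical shapes mirroring the snippet above: batch Ws=2, window L=4,
# horizon h=3, and 2 future exogenous features supplied over L + h steps.
Ws, L, h, n_futr = 2, 4, 3, 2
futr_exog = np.random.rand(Ws, L + h, n_futr)

# The model keeps only the first `input_size` (= L) steps, i.e. the
# historical window; the h horizon values are discarded at this point.
x_mark_enc = futr_exog[:, :L, :]
assert x_mark_enc.shape == (Ws, L, n_futr)
```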
Moreover, the original paper does not state that future features are supported.
Versions / Dependencies
None
Reproduction script
None
Issue Severity
None
Hi @WenjuanOlympus! Sorry for the late reply.
The original implementation has future exogenous covariates; they are part of its DataEmbedding (https://github.com/thuml/Time-Series-Library/blob/main/layers/Embed.py#L109). It parses the exogenous covariates with a linear layer (TimeFeatureEmbedding). We have the same support in our library.
Regarding the terminology, we call future variables those whose values are known in the forecast horizon (the most common example is calendar features such as holidays). The future values of historical features are not known. All models use both the past values and future values of the future variables to add more information :).
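The distinction can be illustrated with a calendar feature. In this hypothetical pandas snippet, `is_weekend` qualifies as a future variable because its values can be filled for the horizon rows just as easily as for the historical rows:

```python
import pandas as pd

# Illustrative only: a calendar feature is deterministic, so its values
# are known in advance for both the training window and the horizon.
dates = pd.date_range("2024-01-01", periods=10, freq="D")
df = pd.DataFrame({"ds": dates})
df["is_weekend"] = (df["ds"].dt.dayofweek >= 5).astype(int)
# A sales or temperature column, by contrast, is a historical variable:
# its horizon values are exactly what the model is asked to predict.
```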
This issue has been automatically closed because it has been awaiting a response for too long. When you have time to work with the maintainers to resolve this issue, please post a new comment and it will be re-opened. If the issue has been locked for editing by the time you return to it, please open a new issue and reference this one.
@cchallu sorry for the late reply.
I still believe that in TimesNet the "future covariates" are actually "historical covariates". Here is my reasoning:
Starting from line 302 in timesnet.py:
```python
# embedding
enc_out = self.enc_embedding(insample_y, x_mark_enc)
enc_out = self.predict_linear(enc_out.permute(0, 2, 1)).permute(
    0, 2, 1
)  # align temporal dimension
```
where

```python
x_mark_enc = futr_exog[:, : self.input_size, :]
```
and at line 284:

```python
self.predict_linear = nn.Linear(self.input_size, self.h + self.input_size)
```
This means that the "future covariates" are truncated to input_size, and the linear layer maps a vector of length input_size to input_size + h (the forecast horizon). In other words, the future values of the "future covariates" are never seen by the model in this case.
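The temporal alignment step can be sketched without PyTorch by emulating what `nn.Linear` does to the permuted tensor (a bias-free sketch with made-up dimensions):

```python
import numpy as np

# Sketch of predict_linear: a linear map from input_size to input_size + h
# applied along the time axis (after permuting channels and time).
L, h, d = 4, 3, 8                  # input_size, horizon, embedding dim
enc_out = np.random.rand(1, L, d)  # [batch, time, channels]
W = np.random.rand(L + h, L)       # predict_linear weights (bias omitted)

# permute to [batch, channels, time], apply the linear map over time,
# then permute back to [batch, time, channels]
aligned = (enc_out.transpose(0, 2, 1) @ W.T).transpose(0, 2, 1)
assert aligned.shape == (1, L + h, d)
# Every one of the h extrapolated steps is a learned mixture of the L
# historical steps only; no genuinely future input is involved.
```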
@WenjuanOlympus I think you are right that in our implementation we only use the future covariates over the input_size window. However, I don't think this is necessarily problematic.