Kashif Rasul
So recently gluonts refactored this method in their master branch, and in preparation for this change I also merged a fix into pytorch-ts' master branch. If you are...
@babaksit yes you can add support for it by making a PR to this https://github.com/awslabs/gluon-ts/blob/master/src/gluonts/time_feature/_base.py#L170
Right, so the input size is the size of the input vector to the model after all the temporal features and covariates have been concatenated to the target value… depending...
It is both a tunable and a non-tunable thing: e.g., by adding more lags I can increase the input size, or by using or discarding potential covariates...
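As a rough illustration of how these pieces add up (the function and parameter names below are hypothetical, not the actual pytorch-ts internals), the effective input size can be sketched as:

```python
# Hypothetical sketch: how a model's input size grows with lags,
# time features, and covariates (names are illustrative, not pytorch-ts API).
def input_size(target_dim, num_lags, num_time_features, num_covariates):
    # each lag contributes one copy of the (lagged) target value, and the
    # time features / covariates are concatenated on top of that
    return target_dim * num_lags + num_time_features + num_covariates

# e.g. a univariate target with 7 lags, 4 calendar features, 2 covariates
print(input_size(1, 7, 4, 2))  # 13
```

So adding or removing lags and covariates directly changes this number, which is why it is partly a tuning decision and partly dictated by the data.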
@julioasotodv I have something for you to test if you'd like?
thanks! Well, typically the `loss` is the log-likelihood of the samples... but you are right, I do not log it during inference. Let me see if I can add that...
@ZhuangweiKang this is a property of all autoregressive models; for example, have a look at https://github.com/zalandoresearch/pytorch-ts/blob/master/pts/model/deepar/deepar_network.py#L361 for the loop over the prediction length where the model samples the next time...
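A minimal sketch of the autoregressive idea (a toy stand-in, not the linked DeepAR code, which conditions on an RNN state rather than just the last value): at each step the model feeds its own previous sample back in to produce the next one.

```python
import torch

# Toy autoregressive sampler: the "model" here is just additive noise on the
# previous sample; real DeepAR samples from a learned output distribution.
def sample_forward(last_value, prediction_length, noise_scale=0.1):
    samples = []
    x = last_value
    for _ in range(prediction_length):
        # sample the next time step conditioned on the previous sample
        x = x + noise_scale * torch.randn_like(x)
        samples.append(x)
    # stack along the time dimension -> shape [batch, prediction_length]
    return torch.stack(samples, dim=-1)

out = sample_forward(torch.zeros(2), prediction_length=5)
print(out.shape)  # torch.Size([2, 5])
```

This loop is why prediction cost grows with the prediction length: each step depends on the sample drawn at the previous step.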
yes, I believe so... although it will produce a tensor of shape `[B, S, T, 1]` where `B` is the batch size, `S` is the number of samples from the...
it's implied by the `-1` so that it works for any batch size: https://github.com/zalandoresearch/pytorch-ts/blob/master/pts/model/deepar/deepar_network.py#L407
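The `-1` trick in a nutshell (a generic PyTorch example, not the linked code itself): one dimension passed to `reshape`/`view` can be left as `-1` and PyTorch infers it from the remaining sizes, so the same call works for any batch size.

```python
import torch

# sample paths flattened into the batch dimension: [B * S, T, 1]
B, S, T = 3, 100, 24
samples = torch.zeros(B * S, T, 1)

# -1 lets PyTorch infer the leading (batch) dimension from the tensor's
# total number of elements, so this reshape works for any B
unflattened = samples.reshape(-1, S, T, 1)
print(unflattened.shape)  # torch.Size([3, 100, 24, 1])
```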
No, I do not think so...