Working with custom dataset, IndexError: index out of range in self

I am trying to train Spacetimeformer on a custom dataset. The run gets through model construction, but crashes with `IndexError: index out of range in self` during the sanity-check pass. Full output:
```
Forecaster L2: 1e-06
Linear Window: 0
Linear Shared Weights: False
RevIN: False
Decomposition: False
/home/vdesai/spacetimeformer/spacetimeformer/spacetimeformer_model/nn/decoder.py:43: UserWarning: The implementation of Local Cross Attn with exogenous variables makes an unintuitive assumption about variable order. Please see spacetimeformer_model.nn.decoder.DecoderLayer source code and comments
  warnings.warn(
GlobalSelfAttn: AttentionLayer(
  (inner_attention): PerformerAttention(
    (kernel_fn): ReLU()
  )
  (query_projection): Linear(in_features=200, out_features=800, bias=True)
  (key_projection): Linear(in_features=200, out_features=800, bias=True)
  (value_projection): Linear(in_features=200, out_features=800, bias=True)
  (out_projection): Linear(in_features=800, out_features=200, bias=True)
  (dropout_qkv): Dropout(p=0.0, inplace=False)
)
GlobalCrossAttn: AttentionLayer( ...identical structure to GlobalSelfAttn... )
LocalSelfAttn: AttentionLayer( ...identical structure to GlobalSelfAttn... )
LocalCrossAttn: AttentionLayer( ...identical structure to GlobalSelfAttn... )
Using Embedding: spatio-temporal
Time Emb Dim: 6
Space Embedding: True
Time Embedding: True
Val Embedding: True
Given Embedding: True
Null Value: -1
Pad Value: -1
Reconstruction Dropout: Timesteps 0.05, Standard 0.1, Seq (max len = 5) 0.2, Skip All Drop 1.0
*** Spacetimeformer (v1.5) Summary: ***
Model Dim: 200
FF Dim: 800
Enc Layers: 3
Dec Layers: 3
Embed Dropout: 0.2
FF Dropout: 0.3
Attn Out Dropout: 0.0
Attn Matrix Dropout: 0.0
QKV Dropout: 0.0
L2 Coeff: 1e-06
Warmup Steps: 0
Normalization Scheme: batch
Attention Time Windows: 1
Shifted Time Windows: False
Position Emb Type: abs
Recon Loss Imp: 0.0
/home/vdesai/anaconda3/envs/bats/lib/python3.8/site-packages/pytorch_lightning/loops/utilities.py:91: PossibleUserWarning: `max_epochs` was not set. Setting it to 1000 epochs. To train without an epoch limit, set `max_epochs=-1`.
  rank_zero_warn(
GPU available: True, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
/home/vdesai/anaconda3/envs/bats/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py:1823: PossibleUserWarning: GPU available but not used. Set `accelerator` and `devices` using `Trainer(accelerator='gpu', devices=2)`.
  rank_zero_warn(
`Trainer(limit_val_batches=1.0)` was configured so 100% of the batches will be used.
  | Name            | Type            | Params
------------------------------------------------
0 | spacetimeformer | Spacetimeformer | 13.5 M
------------------------------------------------
13.5 M    Trainable params
0         Non-trainable params
13.5 M    Total params
54.080    Total estimated model params size (MB)
Sanity Checking DataLoader 0:   0%|          | 0/2 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "train.py", line 181, in <module>
```
Any help would be appreciated.

Could you also explain how custom datasets are meant to be used with this repo?
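For reference, my understanding is that the model's forward pass consumes `(x_context, y_context, x_target, y_target)` tuples, so a custom dataset should boil down to a `torch.utils.data.Dataset` that yields those four tensors. Is something like the sketch below the right shape? (The class name, windowing logic, and shapes are my assumptions, not the repo's actual loader classes.)

```python
import numpy as np
import torch
from torch.utils.data import Dataset

class SlidingWindowDataset(Dataset):
    """Hypothetical loader: slices a (T, d_y) value array and matching
    (T, d_x) time-feature array into (x_context, y_context, x_target,
    y_target) windows, the four-tensor format the forward pass expects."""

    def __init__(self, y: np.ndarray, x: np.ndarray,
                 context_len: int = 100, target_len: int = 20):
        assert len(y) == len(x), "values and time features must align"
        self.y = torch.tensor(y, dtype=torch.float32)
        self.x = torch.tensor(x, dtype=torch.float32)
        self.context_len = context_len
        self.target_len = target_len

    def __len__(self):
        # number of full context+target windows in the series
        return len(self.y) - self.context_len - self.target_len + 1

    def __getitem__(self, i):
        c, t = self.context_len, self.target_len
        return (
            self.x[i : i + c],          # x_context: past time features
            self.y[i : i + c],          # y_context: past values
            self.x[i + c : i + c + t],  # x_target: future time features
            self.y[i + c : i + c + t],  # y_target: future values (labels)
        )
```

With `y` as a `(T, d_y)` array of target variables and `x` as matching `(T, d_x)` time features, wrapping this in a `DataLoader` would then yield batches in that four-tensor format.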