temporal_fusion_transformer_pytorch

Got an error while running trainer = pl.Trainer?

Xanyv opened this issue 4 years ago · 3 comments

When I run:

trainer = pl.Trainer(
    max_nb_epochs=tft.num_epochs,
    gpus=1,
    track_grad_norm=2,
    gradient_clip_val=tft.max_gradient_norm,
    early_stop_callback=early_stop_callback,
    # train_percent_check=0.01,
    # val_percent_check=0.01,
    # test_percent_check=0.01,
    overfit_pct=0.01,
    # fast_dev_run=True,
    profiler=True,
    # print_nan_grads=True,
    # distributed_backend='dp'
)
trainer.fit(tft)

in training_tft.ipynb, it raises the error below:

TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>
----> 1 trainer = pl.Trainer(max_nb_epochs = tft.num_epochs,
      2     gpus = 1,
      3     track_grad_norm = 2,
      4     gradient_clip_val = tft.max_gradient_norm,
      5     early_stop_callback = early_stop_callback,

TypeError: __init__() got an unexpected keyword argument 'max_nb_epochs'

Would appreciate it a lot if anyone could help with this bug.

Xanyv, Oct 12 '20

Hello, this code uses an older version of PyTorch Lightning, which accepted the max_nb_epochs argument. I recommend you look at PyTorch Forecasting; it took the Temporal Fusion Transformer from this repo and has an updated implementation: https://github.com/jdb78/pytorch-forecasting/tree/master

dehoyosb, Oct 13 '20
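For reference, a minimal sketch of training a TFT with pytorch-forecasting might look roughly like the following. The dataframe df, its column names, and all hyperparameters below are placeholders, and the exact trainer.fit argument names depend on the installed PyTorch Lightning version, so treat this as an illustration rather than the library's canonical example:

import pytorch_lightning as pl
from pytorch_forecasting import TimeSeriesDataSet, TemporalFusionTransformer
from pytorch_forecasting.metrics import QuantileLoss

# df is assumed to be a long-format pandas DataFrame with
# "time_idx", "series", and "value" columns (placeholder names).
training = TimeSeriesDataSet(
    df,
    time_idx="time_idx",
    target="value",
    group_ids=["series"],
    max_encoder_length=24,
    max_prediction_length=6,
    time_varying_unknown_reals=["value"],
)
train_dataloader = training.to_dataloader(train=True, batch_size=64)

# Build the model from the dataset definition; hyperparameters are illustrative.
tft = TemporalFusionTransformer.from_dataset(
    training,
    learning_rate=0.03,
    hidden_size=16,
    loss=QuantileLoss(),
)

trainer = pl.Trainer(max_epochs=10, gradient_clip_val=0.1)
trainer.fit(tft, train_dataloaders=train_dataloader)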

@Xanyv use max_epochs instead. Also, after solving this you'll get an error on 'early_stop_callback'; pass it through callbacks instead.

SaeedArisha, May 14 '21

max_nb_epochs --> max_epochs
early_stop_callback --> callbacks
profiler=True --> profiler="advanced"

KatrinaJK, Jul 09 '21
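Applying the renames above to the original call, the updated version would look roughly like this on the PyTorch Lightning 1.x releases current at the time of these comments; the EarlyStopping settings are placeholders, and tft.num_epochs / tft.max_gradient_norm come from the original snippet:

import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping

# Placeholder early-stopping configuration; adjust monitor/patience to your setup.
early_stop_callback = EarlyStopping(monitor="val_loss", patience=5, mode="min")

trainer = pl.Trainer(
    max_epochs=tft.num_epochs,                # max_nb_epochs --> max_epochs
    gpus=1,
    track_grad_norm=2,
    gradient_clip_val=tft.max_gradient_norm,
    callbacks=[early_stop_callback],          # early_stop_callback --> callbacks
    profiler="advanced",                      # profiler=True --> profiler="advanced"
)
trainer.fit(tft)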