
Hyperparameter tuning

Open · Libardo1 opened this issue 2 years ago

  • PyTorch-Forecasting version: 0.9.0
  • PyTorch version: 1.10.0+cu102
  • Python version: 3.7.13
  • Operating System: Colab

Expected behavior

I reproduced the tutorial code and everything is fine until the hyperparameter tuning step. I ran the code below and expected the study to run its trials and report the best hyperparameters.

import pickle

from pytorch_forecasting.models.temporal_fusion_transformer.tuning import optimize_hyperparameters

# create study
study = optimize_hyperparameters(
    train_dataloader=train_dataloader,
    val_dataloader=val_dataloader,
    model_path="optuna_test",
    n_trials=200,
    max_epochs=50,
    gradient_clip_val_range=(0.01, 1.0),
    hidden_size_range=(8, 128),
    hidden_continuous_size_range=(8, 128),
    attention_head_size_range=(1, 4),
    learning_rate_range=(0.001, 0.1),
    dropout_range=(0.1, 0.3),
    trainer_kwargs=dict(limit_train_batches=30, gpus=1),
    reduce_on_plateau_patience=4,
    use_learning_rate_finder=False,
)

# save study results - also we can resume tuning at a later point in time
with open("test_study.pkl", "wb") as fout:
    pickle.dump(study, fout)

# show best hyperparameters
print(study.best_trial.params)
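For reference, this is roughly how I planned to reuse the saved study later. It is only a sketch based on the snippet above; whether tuning can actually be resumed by passing the study back in depends on the optimize_hyperparameters version and is not something I have verified on 0.9.0.

```python
import pickle

# reload the pickled Optuna study to inspect its results later
with open("test_study.pkl", "rb") as fin:
    study = pickle.load(fin)

# best hyperparameters found so far
print(study.best_trial.params)

# Unverified assumption: if your optimize_hyperparameters version accepts a
# `study` argument, passing the loaded study back in should resume tuning
# instead of starting a fresh study.
```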

Actual behavior

However, every trial fails with TypeError: fit() got an unexpected keyword argument 'train_dataloader' (full traceback below). I think it has to do with how the tuning code passes the dataloaders to trainer.fit(), because the error is raised inside pytorch_forecasting's tuning.py rather than in my own code.

Code to reproduce the problem

[I 2022-04-07 21:44:32,875] A new study created in memory with name: no-name-42793384-0e2c-4fdf-b0cf-72c707c47e47
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/connectors/callback_connector.py:97: LightningDeprecationWarning: Setting `Trainer(progress_bar_refresh_rate=1)` is deprecated in v1.5 and will be removed in v1.7. Please pass `pytorch_lightning.callbacks.progress.TQDMProgressBar` with `refresh_rate` directly to the Trainer's `callbacks` argument instead. Or, to disable the progress bar pass `enable_progress_bar = False` to the Trainer.
  f"Setting `Trainer(progress_bar_refresh_rate={progress_bar_refresh_rate})` is deprecated in v1.5 and"
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
[W 2022-04-07 21:44:32,931] Trial 0 failed because of the following error: TypeError("fit() got an unexpected keyword argument 'train_dataloader'")
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/optuna/study/_optimize.py", line 213, in _run_trial
    value_or_values = func(trial)
  File "/usr/local/lib/python3.7/dist-packages/pytorch_forecasting/models/temporal_fusion_transformer/tuning.py", line 206, in objective
    trainer.fit(model, train_dataloader=train_dataloader, val_dataloaders=val_dataloader)
TypeError: fit() got an unexpected keyword argument 'train_dataloader'
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-54-466d83bada41> in <module>()
     18     trainer_kwargs=dict(limit_train_batches=30, gpus=1),
     19     reduce_on_plateau_patience=4,
---> 20     use_learning_rate_finder= False,
     21 )
     22 

6 frames
/usr/local/lib/python3.7/dist-packages/pytorch_forecasting/models/temporal_fusion_transformer/tuning.py in objective(trial)
    204 
    205         # fit
--> 206         trainer.fit(model, train_dataloader=train_dataloader, val_dataloaders=val_dataloader)
    207 
    208         # report result

TypeError: fit() got an unexpected keyword argument 'train_dataloader'


Libardo1 — Apr 07 '22 22:04

Running into the same issue. I think this might be caused by PyTorch Lightning's update to 1.6 in March:

https://pytorch-lightning.readthedocs.io/en/stable/generated/CHANGELOG.html

"Removed deprecated Trainer.fit(train_dataloader=), Trainer.validate(val_dataloaders=), and Trainer.test(test_dataloader=) (#10325)"

Fixed by passing the dataloaders positionally:

trainer.fit(tft, train_dataloader, val_dataloader)
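If you would rather not edit the installed tuning.py by hand, another option is a small compatibility shim that renames the removed keyword before optimize_hyperparameters is called. This is only a sketch I have not tested across Lightning releases, and other incompatibilities between pytorch-forecasting 0.9.0 and Lightning >= 1.6 may remain.

```python
# Compatibility shim (untested sketch): translate the pre-1.6 keyword that
# pytorch_forecasting's tuning.py still passes into the name that
# pytorch-lightning >= 1.6 expects. Run this before optimize_hyperparameters.
import pytorch_lightning as pl

_original_fit = pl.Trainer.fit

def _fit_compat(self, model, *args, **kwargs):
    if "train_dataloader" in kwargs:
        kwargs["train_dataloaders"] = kwargs.pop("train_dataloader")
    return _original_fit(self, model, *args, **kwargs)

pl.Trainer.fit = _fit_compat
```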

Forecastlife — May 26 '22 20:05

I realize I'm about five months late to the party with this response, but in case anyone else runs across this: the way I fixed it (without having to edit any code) was by pinning the PyTorch Lightning version in my environment (Docker, in my case) to pytorch-lightning==1.2.10.
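The pin is just pip install "pytorch-lightning==1.2.10" (or the equivalent line in a requirements.txt or Dockerfile). 1.2.10 happens to be the version that worked for me; other releases before 1.6 that still accept the old fit() keyword may also work, but I have not tested them. A quick sanity check that the pin actually took effect:

```python
# Confirm the environment picked up the pinned pre-1.6 Lightning release.
import pytorch_lightning as pl

print(pl.__version__)  # expect "1.2.10"
assert pl.__version__.startswith("1.2"), pl.__version__
```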

YojoNick — Sep 22 '22 13:09

Thanks,


Libardo1 — Sep 22 '22 21:09