
Type error when trying run trainer.fit with tft

Open jzicker opened this issue 2 years ago • 15 comments

  • PyTorch-Forecasting version: 1.0
  • PyTorch version: 2.0
  • Python version:
  • Operating System: running on google colab

Expected behavior

Running trainer.fit(...) used to train the model without errors, and I expected it to keep working.

Actual behavior

Instead I get a TypeError. I suspect it is related to the April 10 release of pytorch-forecasting, which moved to PyTorch 2.0.

Code to reproduce the problem

trainer.fit(
    tft,
    train_dataloaders=train_dataloader,
    val_dataloaders=val_dataloader,
)

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-57-127e7bdaac70> in <cell line: 1>()
----> 1 trainer.fit(
      2     tft,
      3     train_dataloaders=train_dataloader,
      4     val_dataloaders=val_dataloader
      5 )

1 frames
/usr/local/lib/python3.9/dist-packages/pytorch_lightning/utilities/compile.py in _maybe_unwrap_optimized(model)
    123     if isinstance(model, pl.LightningModule):
    124         return model
--> 125     raise TypeError(
    126         f"`model` must be a `LightningModule` or `torch._dynamo.OptimizedModule`, got `{type(model).__qualname__}`"
    127     )

TypeError: `model` must be a `LightningModule` or `torch._dynamo.OptimizedModule`, got `TemporalFusionTransformer`
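
For reference, the isinstance check in the traceback fails because the pytorch_lightning and lightning.pytorch packages define two distinct LightningModule classes: pytorch-forecasting 1.0 builds its models on lightning.pytorch, while a Trainer imported from pytorch_lightning tests against its own class. A minimal diagnostic sketch (assuming both distributions are installed and tft is the model from above):

import lightning.pytorch
import pytorch_lightning

# The two packages ship separate copies of the same class, so an
# isinstance() check against one fails for a model built from the other.
print(pytorch_lightning.LightningModule is lightning.pytorch.LightningModule)  # False
print(isinstance(tft, lightning.pytorch.LightningModule))  # True with pytorch-forecasting 1.0
print(isinstance(tft, pytorch_lightning.LightningModule))  # False, hence the TypeError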


jzicker avatar Apr 12 '23 03:04 jzicker

same problem!!!!!

MorrisHsieh3059 avatar Apr 12 '23 21:04 MorrisHsieh3059

I can't post all the code or give access to the notebook. But here are relevant portions.

early_stop_callback = EarlyStopping(
    monitor="val_loss", min_delta=MIN_DELTA, patience=PATIENCE, verbose=False, mode="min"
)
checkpoint_callback = ModelCheckpoint(monitor="val_loss", mode="min")
lr_logger = LearningRateMonitor()
logger = TensorBoardLogger("lightning_logs")

trainer = pl.Trainer(
    max_epochs=MAX_EPOCHS,
    accelerator="gpu",
    devices="auto",
    enable_model_summary=True,
    gradient_clip_val=GRADIENT_CLIP_VAL,
    callbacks=[lr_logger, early_stop_callback, checkpoint_callback],
    logger=logger,
)

tft = TemporalFusionTransformer.from_dataset(
    training,
    learning_rate=LR,
    hidden_size=HIDDEN_SIZE,
    attention_head_size=ATTENTION_HEAD_SIZE,
    dropout=DROPOUT,
    hidden_continuous_size=HIDDEN_CONTINUOUS_SIZE,
    output_size=7,  # there are 7 quantiles by default: [0.02, 0.1, 0.25, 0.5, 0.75, 0.9, 0.98]
    loss=QuantileLoss(),
    log_interval=10,
    log_val_interval=10,
    reduce_on_plateau_patience=REDUCE_ON_PLATEAU_PATIENCE,
    lstm_layers=2,
)

jzicker avatar Apr 13 '23 00:04 jzicker

same error!

rmagesh148 avatar Apr 13 '23 00:04 rmagesh148

Same error here

pjwu1997 avatar Apr 13 '23 08:04 pjwu1997

I get the same error with a custom model subclassed from TFT

vikolss avatar Apr 14 '23 13:04 vikolss

Had the same issue, but changing pytorch_lightning to lightning.pytorch resolved it.

shuya-li-wmg avatar Apr 14 '23 15:04 shuya-li-wmg

I still have this problem, even with lightning.pytorch.

MariaSky7 avatar Apr 14 '23 16:04 MariaSky7

I suggest you make the following changes:

import lightning.pytorch as pl # Instead of import pytorch_lightning as pl

# [...]
# Also the way callbacks are imported should be slightly modified.
from lightning.pytorch.callbacks.early_stopping import EarlyStopping
from lightning.pytorch.callbacks import LearningRateMonitor

Everything is up to date in the usage examples section: https://github.com/jdb78/pytorch-forecasting
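
For completeness, here is a minimal end-to-end sketch with everything imported from the new namespace (training, train_dataloader and val_dataloader are the objects from the snippets above; the hyperparameters are placeholders):

import lightning.pytorch as pl
from lightning.pytorch.callbacks import EarlyStopping, LearningRateMonitor
from lightning.pytorch.loggers import TensorBoardLogger
from pytorch_forecasting import TemporalFusionTransformer
from pytorch_forecasting.metrics import QuantileLoss

# Every Lightning object comes from lightning.pytorch, the same namespace
# that pytorch-forecasting 1.0 subclasses, so the isinstance() check passes.
early_stop_callback = EarlyStopping(monitor="val_loss", patience=3, mode="min")
lr_logger = LearningRateMonitor()

trainer = pl.Trainer(
    max_epochs=30,
    accelerator="auto",
    callbacks=[lr_logger, early_stop_callback],
    logger=TensorBoardLogger("lightning_logs"),
)

tft = TemporalFusionTransformer.from_dataset(
    training,
    learning_rate=0.03,
    hidden_size=16,
    attention_head_size=1,
    dropout=0.1,
    loss=QuantileLoss(),
)

trainer.fit(tft, train_dataloaders=train_dataloader, val_dataloaders=val_dataloader)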

Smendowski avatar Apr 16 '23 08:04 Smendowski

@Smendowski Thank you so much! You unblocked me.

jzicker avatar Apr 16 '23 15:04 jzicker

Hey guys! I'm still having the same issue, what am I doing wrong?

import torch
from pytorch_forecasting import Baseline, TemporalFusionTransformer, TimeSeriesDataSet
from pytorch_forecasting.data import GroupNormalizer
from pytorch_forecasting.metrics import MAE, SMAPE, PoissonLoss, QuantileLoss
from pytorch_forecasting.models.temporal_fusion_transformer.tuning import optimize_hyperparameters
import lightning.pytorch as pl
from lightning.pytorch.callbacks.early_stopping import EarlyStopping
from lightning.pytorch.callbacks import LearningRateMonitor
from lightning.pytorch.loggers import TensorBoardLogger
from lightning.pytorch.tuner import Tuner

.
.
.

early_stop_callback = EarlyStopping(monitor="val_loss", min_delta=1e-4, patience=1, verbose=False, mode="min")
lr_logger = LearningRateMonitor()
trainer = pl.Trainer(
    max_epochs=100,
    accelerator="auto", 
    gradient_clip_val=0.1,
    limit_train_batches=30, 
    callbacks=[lr_logger, early_stop_callback],
    logger=TensorBoardLogger("lightning_logs")
)

tft = TemporalFusionTransformer.from_dataset(
    training,
    hidden_size=32,
    attention_head_size=1,
    dropout=0.1,
    hidden_continuous_size=16,
    loss=QuantileLoss(),
    log_interval=2,
    learning_rate=0.03,
    reduce_on_plateau_patience=4
)

res = Tuner(trainer).lr_find(
    tft, 
    train_dataloaders=train_dataloader, 
    val_dataloaders=val_dataloader, 
    early_stop_threshold=1000.0, 
    max_lr=10.0,
    min_lr=1e-6,
)

This is the error message:

TypeError: `model` must be a `LightningModule`, got `TemporalFusionTransformer`

DevSoftChuck avatar Apr 17 '23 15:04 DevSoftChuck

@DevSoftChuck Your packages may be out of date (torch etc.)?

grosestq avatar Apr 18 '23 07:04 grosestq

I had the same problem. It must be related to the breaking changes in torch 2.0.0 and pytorch-forecasting 1.0.0 (as mentioned by @Smendowski above).

As an intermediate solution, I stuck with pytorch-forecasting==0.10.3 and torch==1.13.1, which work fine.

Further adjustments might be necessary for the upgrade. E.g., the "gpus" argument of the Trainer is no longer valid (https://lightning.ai/docs/pytorch/latest/upgrade/from_1_9.html), so when upgrading you will most likely have to adapt parts of your code.
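
For example, a Trainer that used the removed argument needs to be rewritten roughly like this (a sketch of the renamed arguments, per the upgrade guide linked above):

# Lightning 1.x style, no longer valid in 2.0:
# trainer = pl.Trainer(gpus=1, max_epochs=30)

# Lightning 2.0 style: select hardware via accelerator/devices instead.
trainer = pl.Trainer(accelerator="gpu", devices=1, max_epochs=30)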

MBristle avatar Apr 18 '23 08:04 MBristle

@grosestq I was using an outdated version of torch.

pytorch-forecasting==0.10.3
pytorch-lightning==1.9.4
torch==1.13.1

then I upgraded some packages to the following version and that problem was solved:

pytorch-forecasting==1.0.0
pytorch-lightning==2.0.1.post0
torch==2.0.0
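
For anyone following along, the upgrade boils down to pinning those versions, e.g. with a command like this (adjust for your environment):

pip install -U "torch==2.0.0" "pytorch-lightning==2.0.1.post0" "pytorch-forecasting==1.0.0"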

Thanks for your help guys!!

DevSoftChuck avatar Apr 18 '23 14:04 DevSoftChuck

Changing all the lightning.pytorch imports back to pytorch_lightning works for me! Like this:

import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping, LearningRateMonitor
from pytorch_lightning.loggers import TensorBoardLogger

chaoss16 avatar Apr 26 '23 14:04 chaoss16

@Smendowski Thank you so much! You unblocked me.

shihao888 avatar Oct 06 '24 11:10 shihao888

I use the same versions and still get the same problem:

pytorch-forecasting==1.0.0
pytorch-lightning==2.0.1.post0
torch==2.0.0

TypeError: `model` must be a `LightningModule` or `torch._dynamo.OptimizedModule`, got `TemporalFusionTransformer`

hagersalehahmed avatar Nov 18 '24 14:11 hagersalehahmed