pytorch-forecasting
Model Fitting and LR Finder failing with `AttributeError: 'NoneType' object has no attribute 'item'`
- PyTorch-Forecasting version: 0.9.2
- PyTorch version: 1.10.1
- Python version: 3.8.12
- Operating System: macOS Monterey 12.1
Expected behavior
Following the tutorial, I tried creating a TFT with multiple targets and the following specification:
# imports assumed by the snippet
import pytorch_lightning as pl
from pytorch_forecasting import TemporalFusionTransformer, TimeSeriesDataSet
from pytorch_forecasting.data import GroupNormalizer, MultiNormalizer
from pytorch_forecasting.metrics import MultiLoss, QuantileLoss

# define dataset
max_encoder_length = 12
max_prediction_length = 24
training_cutoff = "2021-06-01"  # day for cutoff
training = TimeSeriesDataSet(
    tft_data.loc[tft_data["month"] <= training_cutoff],
    time_idx="time_idx",
    target=["gbv", "rev_bc", "rev_ac"],
    group_ids=["market", "dealtype"],
    min_encoder_length=3,
    max_encoder_length=max_encoder_length,
    max_prediction_length=max_prediction_length,
    static_categoricals=["market", "dealtype"],
    time_varying_unknown_reals=["take_rate", "cancel_rate", "cancel_volume"],
    target_normalizer=MultiNormalizer(
        [
            GroupNormalizer(groups=["market", "dealtype"], transformation="softplus"),
            GroupNormalizer(groups=["market", "dealtype"], transformation="softplus"),
            GroupNormalizer(groups=["market", "dealtype"], transformation="softplus"),
        ]
    ),
)
# create validation set (predict=True) which means to predict the last max_prediction_length points in time
# for each series
validation = TimeSeriesDataSet.from_dataset(training, tft_data, predict=True, stop_randomization=True)
# create dataloaders for model
batch_size = 128  # set this between 32 and 128
train_dataloader = training.to_dataloader(train=True, batch_size=batch_size, num_workers=0)
val_dataloader = validation.to_dataloader(train=False, batch_size=batch_size * 10, num_workers=0)
# configure network and trainer
pl.seed_everything(42)
trainer = pl.Trainer(
    gpus=0,
    # clipping gradients is a hyperparameter and important to prevent divergence
    # of the gradient for recurrent neural networks
    gradient_clip_val=0.1,
)
tft = TemporalFusionTransformer.from_dataset(
    training,
    # not meaningful for finding the learning rate but otherwise very important
    learning_rate=0.03,
    hidden_size=16,  # most important hyperparameter apart from learning rate
    # number of attention heads; set to up to 4 for large datasets
    attention_head_size=4,
    dropout=0.1,  # between 0.1 and 0.3 are good values
    hidden_continuous_size=8,  # set to <= hidden_size
    output_size=[7, 7, 7],  # 7 quantiles by default
    loss=MultiLoss([QuantileLoss(), QuantileLoss(), QuantileLoss()]),
    # reduce learning rate if no improvement in validation loss after x epochs
    reduce_on_plateau_patience=4,
)
print(f"Number of parameters in network: {tft.size()/1e3:.1f}k")
To tune the learning rate, I ran:
# find optimal learning rate
res = trainer.tuner.lr_find(
    tft,
    train_dataloader=train_dataloader,
    val_dataloaders=val_dataloader,
    max_lr=10.0,
    min_lr=1e-6,
)
print(f"suggested learning rate: {res.suggestion()}")
fig = res.plot(show=True, suggest=True)
fig.show()
I expected a learning-rate finder output analogous to the Stallion tutorial in the docs.
Similarly, for
trainer.fit(
    tft,
    train_dataloader=train_dataloader,
    val_dataloaders=val_dataloader,
)
I expected the corresponding training progress output.
Actual behavior
However, the result was:
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
/Users/bernhard.kaindl/projects/p_n_l/report.ipynb Cell 54' in <module>
      1 # find optimal learning rate
----> 2 res = trainer.tuner.lr_find(
      3     tft,
      4     train_dataloader=train_dataloader,
      5     val_dataloaders=val_dataloader,
      6     max_lr=10.0,
      7     min_lr=1e-6,
      8 )
     10 print(f"suggested learning rate: {res.suggestion()}")
     11 fig = res.plot(show=True, suggest=True)

File ~/projects/p_n_l/env/lib/python3.8/site-packages/pytorch_lightning/tuner/tuning.py:185, in Tuner.lr_find(self, model, train_dataloaders, val_dataloaders, datamodule, min_lr, max_lr, num_training, mode, early_stop_threshold, update_attr, train_dataloader)
    148 """Enables the user to do a range test of good initial learning rates, to reduce the amount of guesswork in
    149 picking a good starting learning rate.
    150
    (...)
    182 or if you are using more than one optimizer.
    183 """
    184 self.trainer.auto_lr_find = True
--> 185 result = self.trainer.tune(
    186     model,
    187     train_dataloaders=train_dataloaders,
    188     train_dataloader=train_dataloader,  # TODO: deprecated - remove with 1.6
    189     val_dataloaders=val_dataloaders,
    190     datamodule=datamodule,
    191     lr_find_kwargs={
    192         "min_lr": min_lr,
    193         "max_lr": max_lr,
    194         "num_training": num_training,
    195         "mode": mode,
    196         "early_stop_threshold": early_stop_threshold,
    197         "update_attr": update_attr,
    198     },
    199 )
    200 self.trainer.auto_lr_find = False
    201 return result["lr_find"]

File ~/projects/p_n_l/env/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py:1092, in Trainer.tune(self, model, train_dataloaders, val_dataloaders, datamodule, scale_batch_size_kwargs, lr_find_kwargs, train_dataloader)
   1087 # links data to the trainer
   1088 self._data_connector.attach_data(
   1089     model, train_dataloaders=train_dataloaders, val_dataloaders=val_dataloaders, datamodule=datamodule
   1090 )
-> 1092 result = self.tuner._tune(model, scale_batch_size_kwargs=scale_batch_size_kwargs, lr_find_kwargs=lr_find_kwargs)
   1094 assert self.state.stopped
   1095 self.tuning = False

File ~/projects/p_n_l/env/lib/python3.8/site-packages/pytorch_lightning/tuner/tuning.py:53, in Tuner._tune(self, model, scale_batch_size_kwargs, lr_find_kwargs)
     51 if self.trainer.auto_lr_find:
     52     lr_find_kwargs.setdefault("update_attr", True)
---> 53     result["lr_find"] = lr_find(self.trainer, model, **lr_find_kwargs)
     55 self.trainer.state.status = TrainerStatus.FINISHED
     57 return result

File ~/projects/p_n_l/env/lib/python3.8/site-packages/pytorch_lightning/tuner/lr_finder.py:238, in lr_find(trainer, model, min_lr, max_lr, num_training, mode, early_stop_threshold, update_attr)
    235 trainer.init_optimizers = lr_finder._exchange_scheduler(trainer)
    237 # Fit, lr & loss logged in callback
--> 238 trainer.tuner._run(model)
    240 # Prompt if we stopped early
    241 if trainer.global_step != num_training:

File ~/projects/p_n_l/env/lib/python3.8/site-packages/pytorch_lightning/tuner/tuning.py:63, in Tuner._run(self, *args, **kwargs)
     61 self.trainer.state.status = TrainerStatus.RUNNING  # last `_run` call might have set it to `FINISHED`
     62 self.trainer.training = True
---> 63 self.trainer._run(*args, **kwargs)
     64 self.trainer.tuning = True

File ~/projects/p_n_l/env/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py:1193, in Trainer._run(self, model, ckpt_path)
   1190 self.checkpoint_connector.resume_end()
   1192 # dispatch `start_training` or `start_evaluating` or `start_predicting`
-> 1193 self._dispatch()
   1195 # plugin will finalized fitting (e.g. ddp_spawn will load trained model)
   1196 self._post_dispatch()

File ~/projects/p_n_l/env/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py:1272, in Trainer._dispatch(self)
   1270     self.training_type_plugin.start_predicting(self)
   1271 else:
-> 1272     self.training_type_plugin.start_training(self)

File ~/projects/p_n_l/env/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py:202, in TrainingTypePlugin.start_training(self, trainer)
    200 def start_training(self, trainer: "pl.Trainer") -> None:
    201     # double dispatch to initiate the training loop
--> 202     self._results = trainer.run_stage()

File ~/projects/p_n_l/env/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py:1282, in Trainer.run_stage(self)
   1280 if self.predicting:
   1281     return self._run_predict()
-> 1282 return self._run_train()

File ~/projects/p_n_l/env/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py:1312, in Trainer._run_train(self)
   1310 self.fit_loop.trainer = self
   1311 with torch.autograd.set_detect_anomaly(self._detect_anomaly):
-> 1312     self.fit_loop.run()

File ~/projects/p_n_l/env/lib/python3.8/site-packages/pytorch_lightning/loops/base.py:145, in Loop.run(self, *args, **kwargs)
    143 try:
    144     self.on_advance_start(*args, **kwargs)
--> 145     self.advance(*args, **kwargs)
    146     self.on_advance_end()
    147     self.restarting = False

File ~/projects/p_n_l/env/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py:234, in FitLoop.advance(self)
    231 data_fetcher = self.trainer._data_connector.get_profiled_dataloader(dataloader)
    233 with self.trainer.profiler.profile("run_training_epoch"):
--> 234     self.epoch_loop.run(data_fetcher)
    236 # the global step is manually decreased here due to backwards compatibility with existing loggers
    237 # as they expect that the same step is used when logging epoch end metrics even when the batch loop has
    238 # finished. this means the attribute does not exactly track the number of optimizer steps applied.
    239 # TODO(@carmocca): deprecate and rename so users don't get confused
    240 self.global_step -= 1

File ~/projects/p_n_l/env/lib/python3.8/site-packages/pytorch_lightning/loops/base.py:145, in Loop.run(self, *args, **kwargs)
    143 try:
    144     self.on_advance_start(*args, **kwargs)
--> 145     self.advance(*args, **kwargs)
    146     self.on_advance_end()
    147     self.restarting = False

File ~/projects/p_n_l/env/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py:220, in TrainingEpochLoop.advance(self, *args, **kwargs)
    214 model_fx = self.trainer.lightning_module.on_train_batch_end
    215 extra_kwargs = (
    216     {"dataloader_idx": 0}
    217     if callable(model_fx) and is_param_in_hook_signature(model_fx, "dataloader_idx", explicit=True)
    218     else {}
    219 )
--> 220 self.trainer.call_hook("on_train_batch_end", batch_end_outputs, batch, batch_idx, **extra_kwargs)
    221 self.trainer.call_hook("on_batch_end")
    222 self.trainer.logger_connector.on_batch_end()

File ~/projects/p_n_l/env/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py:1477, in Trainer.call_hook(self, hook_name, pl_module, *args, **kwargs)
   1475 callback_fx = getattr(self, hook_name, None)
   1476 if callable(callback_fx):
-> 1477     callback_fx(*args, **kwargs)
   1479 # next call hook in lightningModule
   1480 output = None

File ~/projects/p_n_l/env/lib/python3.8/site-packages/pytorch_lightning/trainer/callback_hook.py:181, in TrainerCallbackHookMixin.on_train_batch_end(self, outputs, batch, batch_idx, dataloader_idx)
    179     callback.on_train_batch_end(self, self.lightning_module, outputs, batch, batch_idx, 0)
    180 else:
--> 181     callback.on_train_batch_end(self, self.lightning_module, outputs, batch, batch_idx)

File ~/projects/p_n_l/env/lib/python3.8/site-packages/pytorch_lightning/tuner/lr_finder.py:347, in _LRCallback.on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx)
    344 if self.progress_bar:
    345     self.progress_bar.update()
--> 347 current_loss = trainer.fit_loop.running_loss.last().item()
    348 current_step = trainer.global_step
    350 # Avg loss (loss with momentum) + smoothing

AttributeError: 'NoneType' object has no attribute 'item'
Given that similar errors had been reported in connection with pytorch-lightning (e.g. #132), I checked the version I was on (initially 1.5.10) and tried pinning it to 1.5.0, the version last mentioned in the release notes (#758), but to no avail.
I also have the same problem.
I think it is a dependency issue, but I am not sure where exactly. Look at https://github.com/jdb78/pytorch-forecasting/blob/master/poetry.lock to see the exact dependencies against which the package is tested.
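For example, to compare what is actually installed against those pins, a quick check like this works (a sketch; the three package names are just the obvious suspects, not read from poetry.lock):
# print installed versions of the likely culprits so they can be
# compared against the pins in poetry.lock
from importlib.metadata import version  # stdlib since Python 3.8

for pkg in ("pytorch-forecasting", "pytorch-lightning", "torch"):
    print(pkg, version(pkg))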
Same problem with the newest released version.
Hi all, I rolled back to pytorch-lightning 1.4.9, 1.5.2, 1.3, and even 1.0.3, the version mentioned in the previous issue about this, and unfortunately none of them helped. I am still receiving AttributeError: 'NoneType' object has no attribute 'item' on the line current_loss = trainer.fit_loop.running_loss.last().item(). I can upload the exact error message and stack trace later today, but I believe it is the same as the above.
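For what it's worth, the error message itself only says that running_loss.last() returned None, so .item() is being called on None — a minimal illustration of that failure mode (a sketch, not the actual Lightning internals):
# stand-in for trainer.fit_loop.running_loss.last(), which returns None
# here because no training loss was accumulated for the batch
last = None
try:
    last.item()
except AttributeError as err:
    print(err)  # 'NoneType' object has no attribute 'item'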
Do you have a reproducible example on Colab?
I will get one up today.
Edit: While looking into this and trying to get the data to CSV (it currently lives in a DB), I played around further with the arguments to TimeSeriesDataSet, and that actually remedied the situation. I had originally only defined a time_idx and left out the add_relative_time_idx argument, as in the example posted above. After adding add_relative_time_idx=True, the error no longer occurred and the program ran as expected; see the sketch below. This was with pytorch-lightning 1.6.0.
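For anyone hitting the same error, this is roughly what the working dataset definition looks like — a sketch based on the snippet at the top of this issue, with only add_relative_time_idx added:
# same definition as above, plus add_relative_time_idx=True
training = TimeSeriesDataSet(
    tft_data.loc[tft_data["month"] <= training_cutoff],
    time_idx="time_idx",
    target=["gbv", "rev_bc", "rev_ac"],
    group_ids=["market", "dealtype"],
    min_encoder_length=3,
    max_encoder_length=max_encoder_length,
    max_prediction_length=max_prediction_length,
    static_categoricals=["market", "dealtype"],
    time_varying_unknown_reals=["take_rate", "cancel_rate", "cancel_volume"],
    add_relative_time_idx=True,  # the argument whose addition made the error go away
    target_normalizer=MultiNormalizer(
        # one normalizer per target, as in the original snippet
        [GroupNormalizer(groups=["market", "dealtype"], transformation="softplus") for _ in range(3)]
    ),
)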