pytorch-lightning
Wandb AttributeError when using Wandb with PyTorch Lightning Trainer
🐛 Bug
I followed the official instructions for using the WandbLogger with PyTorch Lightning.
I pip-installed wandb and all PyTorch-related modules, logged into Wandb, and then ran the following sequence of steps:
from pytorch_lightning.loggers import WandbLogger
from pytorch_forecasting import TemporalFusionTransformer
import wandb
import pytorch_lightning as pl
wandb_logger = WandbLogger(project="my_project")
trainer = pl.Trainer(logger=wandb_logger)
tft = TemporalFusionTransformer.from_dataset(...)
trainer.fit(tft, ...)
but I get an error:
AttributeError: 'Run' object has no attribute 'add_figure'
This error only appears when I use wandb_logger, and it is raised when I run trainer.fit().
Expected behavior
Metrics are logged and shown in Wandb.
Environment
- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
- PyTorch Lightning Version (e.g., 1.5.0): 1.6.3
- PyTorch-Forecasting Version (e.g., 1.10): 0.10.3
- Python version (e.g., 3.9): 3.10.6
- OS (e.g., Linux): WSL2
- GPU models and configuration: Nvidia GPU
- How you installed PyTorch (conda, pip, source): poetry add
- If compiling from source, the output of torch.__config__.show(): poetry
- Running environment of LightningApp (e.g. local, cloud): local
Additional context
cc @awaelchli @morganmcg1 @borisdayma @scottire @manangoel99
Can you share the complete error stacktrace?
Here it is:
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In [22], line 2
      1 # fit network
----> 2 trainer.fit(
      3     tft,
      4     train_dataloaders=train_dataloader,
      5     val_dataloaders=val_dataloader,
      6 )
      7 wandb.finish()

File /mnt/c/Users/tomic/DataSnoop/forward_curve/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py:768, in Trainer.fit(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
    767 self.strategy.model = model
--> 768 self._call_and_handle_interrupt(
    769     self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
    770 )

File /mnt/c/Users/tomic/DataSnoop/forward_curve/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py:721, in Trainer._call_and_handle_interrupt(self, trainer_fn, *args, **kwargs)
    719     return self.strategy.launcher.launch(trainer_fn, *args, trainer=self, **kwargs)
    720 else:
--> 721     return trainer_fn(*args, **kwargs)
    722 # TODO: treat KeyboardInterrupt as BaseException (delete the code below) in v1.7
    723 except KeyboardInterrupt as exception:

File /mnt/c/Users/tomic/DataSnoop/forward_curve/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py:809, in Trainer._fit_impl(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
    805 ckpt_path = ckpt_path or self.resume_from_checkpoint
    806 self._ckpt_path = self.__set_ckpt_path(
    807     ckpt_path, model_provided=True, model_connected=self.lightning_module is not None
    808 )
--> 809 results = self._run(model, ckpt_path=self.ckpt_path)
    811 assert self.state.stopped
    812 self.training = False

File /mnt/c/Users/tomic/DataSnoop/forward_curve/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py:1234, in Trainer._run(self, model, ckpt_path)
   1230 self._checkpoint_connector.restore_training_state()
   1232 self._checkpoint_connector.resume_end()
--> 1234 results = self._run_stage()
   1236 log.detail(f"{self.__class__.__name__}: trainer tearing down")
   1237 self._teardown()

File /mnt/c/Users/tomic/DataSnoop/forward_curve/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py:1321, in Trainer._run_stage(self)
   1319 if self.predicting:
   1320     return self._run_predict()
--> 1321 return self._run_train()

File /mnt/c/Users/tomic/DataSnoop/forward_curve/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py:1343, in Trainer._run_train(self)
   1340 self._pre_training_routine()
   1342 with isolate_rng():
--> 1343     self._run_sanity_check()
   1345 # enable train mode
   1346 self.model.train()

File /mnt/c/Users/tomic/DataSnoop/forward_curve/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py:1411, in Trainer._run_sanity_check(self)
   1409 # run eval step
   1410 with torch.no_grad():
--> 1411     val_loop.run()
   1413 self._call_callback_hooks("on_sanity_check_end")
   1415 # reset logger connector

File /mnt/c/Users/tomic/DataSnoop/forward_curve/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/base.py:204, in Loop.run(self, *args, **kwargs)
    202 try:
    203     self.on_advance_start(*args, **kwargs)
--> 204     self.advance(*args, **kwargs)
    205     self.on_advance_end()
    206     self._restarting = False

File /mnt/c/Users/tomic/DataSnoop/forward_curve/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py:154, in EvaluationLoop.advance(self, *args, **kwargs)
    152 if self.num_dataloaders > 1:
    153     kwargs["dataloader_idx"] = dataloader_idx
--> 154 dl_outputs = self.epoch_loop.run(self._data_fetcher, dl_max_batches, kwargs)
    156 # store batch level output per dataloader
    157 self._outputs.append(dl_outputs)

File /mnt/c/Users/tomic/DataSnoop/forward_curve/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/base.py:204, in Loop.run(self, *args, **kwargs)
    202 try:
    203     self.on_advance_start(*args, **kwargs)
--> 204     self.advance(*args, **kwargs)
    205     self.on_advance_end()
    206     self._restarting = False

File /mnt/c/Users/tomic/DataSnoop/forward_curve/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py:127, in EvaluationEpochLoop.advance(self, data_fetcher, dl_max_batches, kwargs)
    124 self.batch_progress.increment_started()
    126 # lightning module methods
--> 127 output = self._evaluation_step(**kwargs)
    128 output = self._evaluation_step_end(output)
    130 self.batch_progress.increment_processed()

File /mnt/c/Users/tomic/DataSnoop/forward_curve/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py:222, in EvaluationEpochLoop._evaluation_step(self, **kwargs)
    220     output = self.trainer._call_strategy_hook("test_step", *kwargs.values())
    221 else:
--> 222     output = self.trainer._call_strategy_hook("validation_step", *kwargs.values())
    224 return output

File /mnt/c/Users/tomic/DataSnoop/forward_curve/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py:1763, in Trainer._call_strategy_hook(self, hook_name, *args, **kwargs)
   1760     return
   1762 with self.profiler.profile(f"[Strategy]{self.strategy.__class__.__name__}.{hook_name}"):
--> 1763     output = fn(*args, **kwargs)
   1765 # restore current_fx when nested context
   1766 pl_module._current_fx_name = prev_fx_name

File /mnt/c/Users/tomic/DataSnoop/forward_curve/.venv/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py:344, in Strategy.validation_step(self, *args, **kwargs)
    343 with self.precision_plugin.val_step_context():
--> 344     return self.model.validation_step(*args, **kwargs)

File /mnt/c/Users/tomic/DataSnoop/forward_curve/.venv/lib/python3.10/site-packages/pytorch_forecasting/models/base_model.py:420, in BaseModel.validation_step(self, batch, batch_idx)
    418 x, y = batch
    419 log, out = self.step(x, y, batch_idx)
--> 420 log.update(self.create_log(x, y, out, batch_idx))
    421 return log

File /mnt/c/Users/tomic/DataSnoop/forward_curve/.venv/lib/python3.10/site-packages/pytorch_forecasting/models/temporal_fusion_transformer/__init__.py:520, in TemporalFusionTransformer.create_log(self, x, y, out, batch_idx, **kwargs)
    519 def create_log(self, x, y, out, batch_idx, **kwargs):
--> 520     log = super().create_log(x, y, out, batch_idx, **kwargs)
    521     if self.log_interval > 0:
    522         log["interpretation"] = self._log_interpretation(out)

File /mnt/c/Users/tomic/DataSnoop/forward_curve/.venv/lib/python3.10/site-packages/pytorch_forecasting/models/base_model.py:469, in BaseModel.create_log(self, x, y, out, batch_idx, prediction_kwargs, quantiles_kwargs)
    467 self.log_metrics(x, y, out, prediction_kwargs=prediction_kwargs)
    468 if self.log_interval > 0:
--> 469     self.log_prediction(
    470         x, out, batch_idx, prediction_kwargs=prediction_kwargs, quantiles_kwargs=quantiles_kwargs
    471     )
    472 return {}

File /mnt/c/Users/tomic/DataSnoop/forward_curve/.venv/lib/python3.10/site-packages/pytorch_forecasting/models/base_model.py:731, in BaseModel.log_prediction(self, x, out, batch_idx, **kwargs)
    725     self.logger.experiment.add_figure(
    726         f"{self.target_names[idx]} {tag}",
    727         f,
    728         global_step=self.global_step,
    729     )
    730 else:
--> 731     self.logger.experiment.add_figure(
    732         tag,
    733         fig,
    734         global_step=self.global_step,
    735     )

AttributeError: 'Run' object has no attribute 'add_figure'
@mtomic123 this is an error from wandb, saying that add_figure is not part of the wandb API; the Run object simply doesn't have this function. Are you calling logger.experiment.add_figure() somewhere in your code? If so, please change it to the native wandb API for logging images and media (https://docs.wandb.ai/guides/track/log/media#images) or use the log_image method on the WandbLogger (https://pytorch-lightning.readthedocs.io/en/stable/api/pytorch_lightning.loggers.wandb.html).
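For concreteness, a minimal sketch of both options (the figure and the key name are placeholders, not from the reported code):

import matplotlib.pyplot as plt
import wandb
from pytorch_lightning.loggers import WandbLogger

wandb_logger = WandbLogger(project="my_project")
fig, ax = plt.subplots()  # placeholder figure
ax.plot([0, 1], [0, 1])

# option 1: the native wandb API; wandb.Image accepts matplotlib figures
wandb_logger.experiment.log({"my_figure": wandb.Image(fig)})

# option 2: the WandbLogger convenience method (available since Lightning 1.6),
# which wraps each entry in wandb.Image internally
wandb_logger.log_image(key="my_figure", images=[fig])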
I'm not logging any images; my model is trained on time-series data. In fact, I haven't added anything extra related to Wandb. I just instantiated the logger with wandb_logger = WandbLogger(project="my_project") and passed it to the Trainer as trainer = pl.Trainer(logger=wandb_logger). Previously I used the TensorBoard logger and it worked normally.
Can you get a cleaned-up stacktrace? The one you shared is a bit unreadable. A screenshot would be fine too.
I found the issue:
My TemporalFusionTransformer looked like this:
from pytorch_forecasting.metrics import QuantileLoss

tft = TemporalFusionTransformer.from_dataset(
training,
learning_rate=0.03,
hidden_size=16,
attention_head_size=1,
dropout=0.1,
hidden_continuous_size=8,
output_size=7, # 7 quantiles by default
loss=QuantileLoss(),
log_interval=10,
reduce_on_plateau_patience=4,
)
The log_interval argument was causing the issue: when I removed it, Wandb logging started working and no error was raised.
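For context, the gate visible in the stacktrace above (pytorch_forecasting.models.base_model.BaseModel.create_log, paraphrased as a sketch here) explains why removing the argument helps; log_interval defaults to a non-positive value, so the TensorBoard-specific call is never reached:

# paraphrased sketch of BaseModel.create_log, not the verbatim source
if self.log_interval > 0:
    # eventually calls self.logger.experiment.add_figure(...), which exists
    # on TensorBoard's SummaryWriter but not on wandb's Run object
    self.log_prediction(x, out, batch_idx, ...)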
@mtomic123 Great that you were able to resolve it yourself.
It is still a bit unsatisfying not to know the real reason it failed. If you can provide the proper error message as requested by @carmocca, we can take another look; otherwise we will close the issue.
@awaelchli This comes from the pytorch-forecasting library, which assumes a TensorBoard logger and hence calls add_figure. The code for that is here.
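Until that is fixed upstream, one possible stopgap (a hypothetical monkey-patch, not an API of either library) is to attach an add_figure shim with the SummaryWriter-style signature that pytorch-forecasting expects to the wandb Run:

import wandb

def _add_figure(tag, figure, global_step=None, **kwargs):
    # forward to the native wandb API; wandb.Image accepts matplotlib figures
    wandb_logger.experiment.log({tag: wandb.Image(figure)})

# attach the shim to the active Run (wandb_logger as defined earlier in the thread)
wandb_logger.experiment.add_figure = _add_figure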
@manangoel99 You are right, thanks! It looks like this was already reported for other logger frameworks: https://github.com/jdb78/pytorch-forecasting/issues/983
I created a PR: https://github.com/jdb78/pytorch-forecasting/pull/1140
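For anyone hitting this in the meantime, the general shape of such a fix is to dispatch on the logger type instead of assuming TensorBoard. A hypothetical sketch (not necessarily what the PR actually does):

import wandb
from pytorch_lightning.loggers import TensorBoardLogger, WandbLogger

def log_figure(logger, tag, fig, global_step):
    # only call add_figure on loggers whose experiment object provides it
    if isinstance(logger, TensorBoardLogger):
        logger.experiment.add_figure(tag, fig, global_step=global_step)
    elif isinstance(logger, WandbLogger):
        logger.experiment.log({tag: wandb.Image(fig)})
    # other backends: skip figure logging rather than crash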