
Implement logger adapter to handle different loggers (TensorBoard and Wandb for now)

Open · mikcnt opened this pull request 3 years ago · 10 comments

Description

This PR resolves #89.

  • I created a base logger class (loggers.ForecastingLoggerBase) to handle different loggers (e.g., TensorBoard, Wandb) with the same syntax. It inherits from the PyTorch Lightning logger base class and implements three new methods used across the PyTorch Forecasting project: add_figure, add_embeddings, and log_gradient_flow (see the sketch after this list).
  • I implemented two new logger classes for TensorBoard and Wandb that build on loggers.ForecastingLoggerBase while also inheriting from their PyTorch Lightning counterparts (pytorch_lightning.loggers.TensorBoardLogger and pytorch_lightning.loggers.WandbLogger, respectively).
  • I modified the tests, replacing all the pytorch_lightning.loggers.TensorBoardLogger instances with the newly created loggers.ForecastingTensorBoardLogger.
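
For reference, here is a minimal sketch of what the adapter hierarchy could look like, assuming PyTorch Lightning 1.x (where the logger base class is LightningLoggerBase); only the three method names come from this PR, while the signatures and bodies are illustrative:

from pytorch_lightning.loggers import LightningLoggerBase, TensorBoardLogger

class ForecastingLoggerBase(LightningLoggerBase):
    """Backend-agnostic interface: models call these methods without knowing the backend."""

    def add_figure(self, tag, figure, global_step=None):
        raise NotImplementedError

    def add_embeddings(self, tag, embeddings, global_step=None):
        raise NotImplementedError

    def log_gradient_flow(self, named_parameters):
        raise NotImplementedError

class ForecastingTensorBoardLogger(ForecastingLoggerBase, TensorBoardLogger):
    """TensorBoard backend: delegates to the underlying SummaryWriter."""

    def add_figure(self, tag, figure, global_step=None):
        # self.experiment is a torch.utils.tensorboard.SummaryWriter
        self.experiment.add_figure(tag, figure, global_step=global_step)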

As for the Wandb logger, loggers.ForecastingWandbLogger, I was wondering if someone could guide me on how to structure its tests.

Moreover, note that ForecastingWandbLogger's log_gradient_flow method is intentionally empty: Wandb logs the distribution of gradients automatically. It is sufficient to call logger.watch(model) after initialization, as in the example below:

from pytorch_forecasting import TemporalFusionTransformer
from pytorch_forecasting.loggers import ForecastingWandbLogger

model = TemporalFusionTransformer(...)  # any pytorch-forecasting model works here
logger = ForecastingWandbLogger(...)
logger.watch(model)  # Wandb now tracks gradient distributions automatically
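
(Under the hood, PyTorch Lightning's WandbLogger.watch forwards to wandb.watch, which logs gradient histograms at a configurable frequency.)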

Finally, I noticed that the tests won't all pass, even on the master branch, unless I move p.grad to the CPU inside the log_gradient_flow method of the BaseModel. Again, I emphasize that this already affects the master branch.
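
For illustration, here is a hedged sketch of such a log_gradient_flow; this is not the actual BaseModel code, and the self.logger.add_figure call assumes the adapter from this PR. The point is the .cpu() call: matplotlib converts tensors via NumPy, which raises for tensors still on the GPU.

import matplotlib.pyplot as plt

def log_gradient_flow(self, named_parameters):
    """Plot the mean absolute gradient per layer to spot vanishing gradients."""
    ave_grads, layers = [], []
    for name, p in named_parameters:
        if p.grad is not None and "bias" not in name:
            layers.append(name)
            # move the statistic to the CPU before handing it to matplotlib
            ave_grads.append(p.grad.abs().mean().cpu())
    fig, ax = plt.subplots()
    ax.plot(ave_grads, alpha=0.3, color="b")
    ax.set_xticks(range(len(layers)))
    ax.set_xticklabels(layers, rotation=90)
    ax.set_ylabel("Average gradient")
    self.logger.add_figure("Gradient flow", fig, global_step=self.global_step)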

Checklist

  • [x] Linked issues (if existing)
  • [x] Amended changelog for large changes (and added myself there as contributor)
  • [x] Added/modified tests
  • [x] Used pre-commit hooks when committing to ensure that code is compliant with hooks. Install hooks with pre-commit install. To run hooks independent of commit, execute pre-commit run --all-files

mikcnt · Jan 02 '22 15:01

Codecov Report

Merging #816 (c6b98ec) into master (34bc7fc) will decrease coverage by 0.08%. The diff coverage is 87.83%.


@@            Coverage Diff             @@
##           master     #816      +/-   ##
==========================================
- Coverage   89.05%   88.97%   -0.09%     
==========================================
  Files          24       28       +4     
  Lines        3829     3881      +52     
==========================================
+ Hits         3410     3453      +43     
- Misses        419      428       +9     
Flag     Coverage Δ
cpu      88.97% <87.83%> (-0.09%) ↓
pytest   88.97% <87.83%> (-0.09%) ↓

Flags with carried forward coverage won't be shown.

Impacted Files                                          Coverage Δ
...torch_forecasting/loggers/wandb_logger/__init__.py    66.66% <66.66%> (ø)
pytorch_forecasting/loggers/base_logger.py               78.57% <78.57%> (ø)
pytorch_forecasting/loggers/__init__.py                 100.00% <100.00%> (ø)
...forecasting/loggers/tensorboard_logger/__init__.py   100.00% <100.00%> (ø)
pytorch_forecasting/models/base_model.py                 87.89% <100.00%> (-0.23%) ↓
pytorch_forecasting/models/nbeats/__init__.py            93.57% <100.00%> (ø)
...ing/models/temporal_fusion_transformer/__init__.py    97.29% <100.00%> (-0.03%) ↓

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Last update 34bc7fc...c6b98ec.

codecov-commenter · Jan 03 '22 14:01

Think we might also have to amend the tutorial notebooks

jdb78 · Jan 05 '22 16:01

Yep, definitely. I'll give them a look tonight.

Do you also have any tips on how to structure tests for the logger adapter?

mikcnt · Jan 05 '22 17:01

I've updated the stallion notebook so that it uses the ForecastingTensorBoardLogger instead of the standard TensorBoardLogger.

mikcnt · Jan 05 '22 19:01

Any news on this?

mikcnt · Jan 22 '22 13:01

There actually turns out to be a nice workaround for this. You can call wandb.init(project='my-project', sync_tensorboard=True) before your PyTorch Lightning code, and it will log everything to Wandb without issues.

For more info see here.
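
A minimal sketch of that workaround (the project name, save directory, and Trainer arguments are placeholders):

import wandb
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger

# mirror everything written to TensorBoard into the Wandb run
wandb.init(project="my-project", sync_tensorboard=True)

# then train with the standard TensorBoard logger as usual
trainer = Trainer(logger=TensorBoardLogger("lightning_logs"), max_epochs=10)
# trainer.fit(model, train_dataloaders=..., val_dataloaders=...)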

B-Deforce · May 26 '22 08:05

Hey @B-Deforce! That's actually pretty nice, thanks for the heads-up, I didn't know about that! So you're suggesting we replace the ForecastingWandbLogger with the standard logger used in pytorch-forecasting and add the wandb initialization with TensorBoard sync?

mikcnt · May 26 '22 08:05

Yes @mikcnt, that's correct. I was also struggling to get Wandb running with PyTorch Forecasting and used your implementation above (which worked great, btw), but then I found this neat little feature of wandb. Just using the standard logger (the TensorBoardLogger) with the wandb init + sync works great.

B-Deforce · May 26 '22 09:05

That's terrific! My original plan was actually to include a wider variety of loggers, not just TensorBoard and Wandb (e.g., MLflow), but since it looks like this won't be merged anytime soon, I'd go with your solution too if I were in your position! 😆

mikcnt · May 26 '22 09:05