pytorch-forecasting
Implement logger adapter to handle different loggers (TensorBoard and Wandb for now)
Description
This PR resolves #89.
- I created a base logger class (`loggers.ForecastingLoggerBase`) to handle different loggers (e.g., TensorBoard, Wandb) with the same syntax. It inherits from the PyTorch Lightning logger base class and implements three new methods used across the PyTorch Forecasting project: `add_figure`, `add_embeddings`, and `log_gradient_flow`.
- I implemented two new logger classes for TensorBoard and Wandb based on `loggers.ForecastingLoggerBase`, each inheriting from its PyTorch Lightning counterpart (`pytorch_lightning.loggers.TensorBoardLogger` and `pytorch_lightning.loggers.WandbLogger`, respectively).
- I modified the tests, replacing all `pytorch_lightning.loggers.TensorBoardLogger` instances with the newly created `loggers.ForecastingTensorBoardLogger`.
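To illustrate the adapter idea, here is a minimal, self-contained sketch of the interface described above. It is hypothetical and deliberately simplified: the real `ForecastingLoggerBase` also inherits from PyTorch Lightning's logger base class, and `RecordingLogger` below is a toy backend invented here only to show that every concrete logger exposes the same three calls.

```python
from abc import ABC, abstractmethod

class ForecastingLoggerBase(ABC):
    """Sketch of the adapter interface; the real class also inherits
    from the PyTorch Lightning logger base class."""

    @abstractmethod
    def add_figure(self, tag, figure, global_step=None):
        """Log a matplotlib figure under `tag`."""

    @abstractmethod
    def add_embeddings(self, tag, embeddings, global_step=None):
        """Log an embedding matrix under `tag`."""

    @abstractmethod
    def log_gradient_flow(self, named_parameters):
        """Log the gradient flow through the model's layers."""

class RecordingLogger(ForecastingLoggerBase):
    """Toy backend: records calls instead of writing to TensorBoard/Wandb."""

    def __init__(self):
        self.calls = []

    def add_figure(self, tag, figure, global_step=None):
        self.calls.append(("figure", tag))

    def add_embeddings(self, tag, embeddings, global_step=None):
        self.calls.append(("embeddings", tag))

    def log_gradient_flow(self, named_parameters):
        self.calls.append(("gradient_flow", len(list(named_parameters))))

logger = RecordingLogger()
logger.add_figure("prediction", figure=None)
logger.log_gradient_flow([("layer.weight", None)])
print(logger.calls)  # [('figure', 'prediction'), ('gradient_flow', 1)]
```

The point of the base class is that model code can call `logger.add_figure(...)` without knowing which backend is active.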
As for the Wandb logger, `loggers.ForecastingWandbLogger`, I was wondering if someone could guide me on how to structure its tests.
Moreover, note that `ForecastingWandbLogger`'s `log_gradient_flow` method is empty: Wandb automatically logs the distribution of gradients. It is sufficient to call `logger.watch(model)` upon initialization, as in the example below:

```python
from pytorch_forecasting import TemporalFusionTransformer
from pytorch_forecasting.loggers import ForecastingWandbLogger

model = TemporalFusionTransformer(...)
logger = ForecastingWandbLogger(...)
logger.watch(model)  # Wandb tracks gradient distributions from here on
```
Finally, I noticed that the tests won't all pass, even on the master branch, unless I move `p.grad` to the CPU inside the `log_gradient_flow` method of `BaseModel`. I emphasize that this is already the case on the master branch.
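The fix boils down to calling `.cpu()` on each gradient tensor before any numpy conversion or plotting, since gradients may live on the GPU. The snippet below is a hedged sketch of that pattern (the parameter and names here are illustrative, not the actual `BaseModel` source):

```python
import torch

# Illustrative parameter; in BaseModel.log_gradient_flow this would be
# each p in model.named_parameters() with a non-None gradient.
param = torch.nn.Parameter(torch.ones(3))
(param * 2.0).sum().backward()

# Without .cpu(), calling .numpy() on a CUDA tensor raises; with it,
# the same code works on both CPU and GPU.
mean_grad = param.grad.abs().mean().cpu().item()
print(mean_grad)  # 2.0
```

On a CPU-only machine `.cpu()` is a no-op, so the change is safe in both settings.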
Checklist
- [x] Linked issues (if existing)
- [x] Amended changelog for large changes (and added myself there as contributor)
- [x] Added/modified tests
- [x] Used pre-commit hooks when committing to ensure that code is compliant with hooks. Install hooks with `pre-commit install`. To run hooks independent of commit, execute `pre-commit run --all-files`
Codecov Report
Merging #816 (c6b98ec) into master (34bc7fc) will decrease coverage by 0.08%. The diff coverage is 87.83%.

```
@@            Coverage Diff             @@
##           master     #816      +/-   ##
==========================================
- Coverage   89.05%   88.97%   -0.09%
==========================================
  Files          24       28       +4
  Lines        3829     3881      +52
==========================================
+ Hits         3410     3453      +43
- Misses        419      428       +9
```

| Flag | Coverage Δ |
|---|---|
| cpu | 88.97% <87.83%> (-0.09%) :arrow_down: |
| pytest | 88.97% <87.83%> (-0.09%) :arrow_down: |

Flags with carried forward coverage won't be shown.

| Impacted Files | Coverage Δ |
|---|---|
| ...torch_forecasting/loggers/wandb_logger/__init__.py | 66.66% <66.66%> (ø) |
| pytorch_forecasting/loggers/base_logger.py | 78.57% <78.57%> (ø) |
| pytorch_forecasting/loggers/__init__.py | 100.00% <100.00%> (ø) |
| ...forecasting/loggers/tensorboard_logger/__init__.py | 100.00% <100.00%> (ø) |
| pytorch_forecasting/models/base_model.py | 87.89% <100.00%> (-0.23%) :arrow_down: |
| pytorch_forecasting/models/nbeats/__init__.py | 93.57% <100.00%> (ø) |
| ...ing/models/temporal_fusion_transformer/__init__.py | 97.29% <100.00%> (-0.03%) :arrow_down: |

Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Powered by Codecov. Last update 34bc7fc...c6b98ec.
Think we might also have to amend the tutorial notebooks
Yep, definitely. I'll give them a look tonight.
Do you also have any tip on how to structure tests for the logger adapter?
I've updated the stallion notebook so that it uses the ForecastingTensorBoardLogger instead of the standard TensorBoardLogger.
Any news on this?
There actually turns out to be a nice workaround for this.
You can add wandb.init(project='my-project', sync_tensorboard=True) followed by your PyTorch code and it'll nicely log everything to Wandb without issues.
For more info see here.
Hey @B-Deforce! That's actually pretty nice, thanks for the heads up, I didn't know about that! So, you're suggesting to replace the ForecastingWandbLogger with the standard logger used in forecasting, and to add the wandb initialization with sync to tensorboard?
Yes @mikcnt, I was actually also struggling to get Wandb running with Pytorch Forecasting and used your implementation above (which worked great btw). But then I found this neat little feature of wandb! Yes, that's correct. Just using the standard logger (which is the TensorBoardLogger) with the wandb init + sync works great.
That's terrific! Actually, my original plan was to include a large variety of loggers, not just TensorBoard or Wandb (e.g., MLFlow), but since it appears that this won't be merged anytime soon, I guess I would go with your solution too if I were in your position! 😆