
Support `log_every_n_steps` with validate|test

Open carmocca opened this pull request 2 years ago • 4 comments

What does this PR do?

Fixes https://github.com/Lightning-AI/lightning/issues/10436

Logging per step during fit's validation, standalone validation, testing, or predicting is generally not useful when the logged values depend on the optimization process. However, sometimes you want to log values that do not, such as throughput-related metrics (as in #18848) or batch-specific metrics.

This PR adds support for tuning the logging interval under these circumstances.

This is only a breaking change if the user was calling self.log(..., on_step=True): the previous behavior is now equivalent to setting log_every_n_steps=1, whereas before this value was ignored.
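
As a rough illustration of the intended usage, here is a minimal sketch (the module and metric names are placeholders for this description, not code from the PR):

import torch
from lightning.pytorch import LightningModule, Trainer


class ThroughputModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def test_step(self, batch, batch_idx):
        loss = self(batch).sum()
        # A batch-level value that does not depend on the optimization process.
        # With on_step=True, it is now written every `log_every_n_steps` test batches.
        self.log("test_loss", loss, on_step=True, on_epoch=True)


# log_every_n_steps now also controls the step-logging interval during validate/test/predict.
trainer = Trainer(log_every_n_steps=5)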


📚 Documentation preview 📚: https://pytorch-lightning--18895.org.readthedocs.build/en/18895/

cc @borda @carmocca @justusschock

carmocca avatar Oct 30 '23 19:10 carmocca

⛈️ Required checks status: Has failure 🔴

Warning This job will need to be re-run to merge your PR. If you do not have write access to the repository, you can ask Lightning-AI/lai-frameworks to re-run it. If you push a new commit, all of CI will re-trigger.

Groups summary

🔴 pytorch_lightning: Tests workflow
Check ID Status
pl-cpu (macOS-11, lightning, 3.8, 1.12, oldest) failure
pl-cpu (macOS-11, lightning, 3.9, 1.12) failure
pl-cpu (macOS-11, lightning, 3.10, 1.13) failure
pl-cpu (macOS-11, lightning, 3.10, 2.0) failure
pl-cpu (macOS-11, lightning, 3.10, 2.1) failure
pl-cpu (ubuntu-20.04, lightning, 3.8, 1.12, oldest) failure
pl-cpu (ubuntu-20.04, lightning, 3.9, 1.12) failure
pl-cpu (ubuntu-20.04, lightning, 3.10, 1.13) failure
pl-cpu (ubuntu-20.04, lightning, 3.10, 2.0) failure
pl-cpu (ubuntu-20.04, lightning, 3.10, 2.1) failure
pl-cpu (windows-2022, lightning, 3.8, 1.12, oldest) failure
pl-cpu (windows-2022, lightning, 3.9, 1.12) failure
pl-cpu (windows-2022, lightning, 3.10, 1.13) failure
pl-cpu (windows-2022, lightning, 3.10, 2.0) failure
pl-cpu (windows-2022, lightning, 3.10, 2.1) failure
pl-cpu (macOS-11, pytorch, 3.8, 1.13) failure
pl-cpu (ubuntu-20.04, pytorch, 3.8, 1.13) failure
pl-cpu (windows-2022, pytorch, 3.8, 1.13) failure
pl-cpu (macOS-12, pytorch, 3.11, 2.0) failure
pl-cpu (macOS-12, pytorch, 3.11, 2.1) failure
pl-cpu (ubuntu-22.04, pytorch, 3.11, 2.0) failure
pl-cpu (ubuntu-22.04, pytorch, 3.11, 2.1) failure
pl-cpu (windows-2022, pytorch, 3.11, 2.0) failure
pl-cpu (windows-2022, pytorch, 3.11, 2.1) failure

These checks are required after the changes to src/lightning/pytorch/callbacks/device_stats_monitor.py, src/lightning/pytorch/callbacks/lr_monitor.py, src/lightning/pytorch/callbacks/throughput_monitor.py, src/lightning/pytorch/trainer/connectors/logger_connector/logger_connector.py, src/lightning/pytorch/trainer/trainer.py, tests/tests_pytorch/trainer/logging_/test_eval_loop_logging.py.

🟢 pytorch_lightning: Azure GPU
Check ID Status
pytorch-lightning (GPUs) (testing Lightning | latest) success
pytorch-lightning (GPUs) (testing PyTorch | latest) success

These checks are required after the changes to src/lightning/pytorch/callbacks/device_stats_monitor.py, src/lightning/pytorch/callbacks/lr_monitor.py, src/lightning/pytorch/callbacks/throughput_monitor.py, src/lightning/pytorch/trainer/connectors/logger_connector/logger_connector.py, src/lightning/pytorch/trainer/trainer.py, tests/tests_pytorch/trainer/logging_/test_eval_loop_logging.py.

🟢 pytorch_lightning: Benchmarks
Check ID Status
lightning.Benchmarks success

These checks are required after the changes to src/lightning/pytorch/callbacks/device_stats_monitor.py, src/lightning/pytorch/callbacks/lr_monitor.py, src/lightning/pytorch/callbacks/throughput_monitor.py, src/lightning/pytorch/trainer/connectors/logger_connector/logger_connector.py, src/lightning/pytorch/trainer/trainer.py.

🟢 pytorch_lightning: Docs
Check ID Status
docs-make (pytorch, doctest) success
docs-make (pytorch, html) success

These checks are required after the changes to src/lightning/pytorch/callbacks/device_stats_monitor.py, src/lightning/pytorch/callbacks/lr_monitor.py, src/lightning/pytorch/callbacks/throughput_monitor.py, src/lightning/pytorch/trainer/connectors/logger_connector/logger_connector.py, src/lightning/pytorch/trainer/trainer.py.

🟢 mypy
Check ID Status
mypy success

These checks are required after the changes to src/lightning/pytorch/callbacks/device_stats_monitor.py, src/lightning/pytorch/callbacks/lr_monitor.py, src/lightning/pytorch/callbacks/throughput_monitor.py, src/lightning/pytorch/trainer/connectors/logger_connector/logger_connector.py, src/lightning/pytorch/trainer/trainer.py.

🟢 install
Check ID Status
install-pkg (ubuntu-22.04, app, 3.8) success
install-pkg (ubuntu-22.04, app, 3.11) success
install-pkg (ubuntu-22.04, fabric, 3.8) success
install-pkg (ubuntu-22.04, fabric, 3.11) success
install-pkg (ubuntu-22.04, pytorch, 3.8) success
install-pkg (ubuntu-22.04, pytorch, 3.11) success
install-pkg (ubuntu-22.04, lightning, 3.8) success
install-pkg (ubuntu-22.04, lightning, 3.11) success
install-pkg (ubuntu-22.04, notset, 3.8) success
install-pkg (ubuntu-22.04, notset, 3.11) success
install-pkg (macOS-12, app, 3.8) success
install-pkg (macOS-12, app, 3.11) success
install-pkg (macOS-12, fabric, 3.8) success
install-pkg (macOS-12, fabric, 3.11) success
install-pkg (macOS-12, pytorch, 3.8) success
install-pkg (macOS-12, pytorch, 3.11) success
install-pkg (macOS-12, lightning, 3.8) success
install-pkg (macOS-12, lightning, 3.11) success
install-pkg (macOS-12, notset, 3.8) success
install-pkg (macOS-12, notset, 3.11) success
install-pkg (windows-2022, app, 3.8) success
install-pkg (windows-2022, app, 3.11) success
install-pkg (windows-2022, fabric, 3.8) success
install-pkg (windows-2022, fabric, 3.11) success
install-pkg (windows-2022, pytorch, 3.8) success
install-pkg (windows-2022, pytorch, 3.11) success
install-pkg (windows-2022, lightning, 3.8) success
install-pkg (windows-2022, lightning, 3.11) success
install-pkg (windows-2022, notset, 3.8) success
install-pkg (windows-2022, notset, 3.11) success

These checks are required after the changes to src/lightning/pytorch/callbacks/device_stats_monitor.py, src/lightning/pytorch/callbacks/lr_monitor.py, src/lightning/pytorch/callbacks/throughput_monitor.py, src/lightning/pytorch/trainer/connectors/logger_connector/logger_connector.py, src/lightning/pytorch/trainer/trainer.py.


Thank you for your contribution! 💜

Note: This comment is automatically generated and updates every 180 seconds for 60 minutes. If you have any other questions, contact carmocca for help.

github-actions[bot] avatar Nov 18 '23 23:11 github-actions[bot]

Hey @carmocca, there are some significant differences here that should be discussed. Here is my simple example:

import os

import torch

from lightning import seed_everything
from lightning.pytorch import LightningModule, Trainer
from torch.utils.data import DataLoader, Dataset

from lightning.pytorch.loggers import TensorBoardLogger


class RandomDataset(Dataset):
    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len


class BoringModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("train_loss", loss)
        return {"loss": loss}

    def validation_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("valid_loss", loss)

    def test_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("test_loss", loss)

    def configure_optimizers(self):
        return torch.optim.SGD(self.layer.parameters(), lr=0.1)


def run():
    seed_everything(0)
    train_data = DataLoader(RandomDataset(32, 64), batch_size=2)
    val_data = DataLoader(RandomDataset(32, 64), batch_size=2)
    test_data = DataLoader(RandomDataset(32, 64), batch_size=2)

    logger = TensorBoardLogger("tb_logs", name="my_model")

    model = BoringModel()
    trainer = Trainer(
        default_root_dir=os.getcwd(),
        logger=logger,
        limit_train_batches=10,
        limit_val_batches=10,
        limit_test_batches=10,
        num_sanity_val_steps=4,
        max_epochs=3,
        enable_model_summary=False,
        log_every_n_steps=2,
    )
    trainer.fit(model, train_dataloaders=train_data, val_dataloaders=val_data)
    trainer.validate(model, val_data)
    trainer.test(model, dataloaders=test_data)


if __name__ == "__main__":
    run()

Running tensorboard --logdir tb_logs and comparing master with this branch.

Red: master, Blue: this PR

Issue 1: Here we see that the new behavior no longer logs the epoch value at the epoch-end step (blue stops at step 29).

[screenshot: the epoch value over steps; the blue curve stops at step 29]

Because it now stops short, there would be a gap when resuming:

[screenshot: the resulting gap in the epoch curve after resuming]

For this reason, I believe the previous behavior should be kept.

Issue 2:

Here the test loss no longer gets logged at the global step; instead, the step is local to the test stage.

[screenshot: test_loss plotted against the test-local step]

Issue 3:

Finally, a few things are going on for validation. 1) The fit-validation values are no longer logged, because they don't fall into the logging interval (they are epoch-end values). 2) Without the global step for fit-validation logging, you can no longer track the validation loss across training: it all gets logged at the same step value.

[screenshot: valid_loss from fit-validation all logged at the same step value]
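
A possible workaround for point 2, sketched here under the assumption that Logger.log_metrics(metrics, step=...) and trainer.state.fn behave as in current Lightning (this is not something proposed in the PR):

    def validation_step(self, batch, batch_idx):
        loss = self(batch).sum()
        if self.trainer.state.fn == "fit":
            # During fit-validation, pin the metric to the training global step so it
            # stays comparable across the whole training run.
            self.logger.log_metrics({"valid_loss": loss.item()}, step=self.trainer.global_step)
        else:
            # Standalone validate keeps the stage-local step produced by self.log.
            self.log("valid_loss", loss)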

awaelchli avatar Nov 21 '23 08:11 awaelchli

I'll look at Issues 1 and 3, thanks. Issue 2 is expected and is the point of this PR: testing uses its own index instead of the training global step.

carmocca avatar Nov 21 '23 15:11 carmocca

⚠️ GitGuardian has uncovered 2 secrets following the scan of your pull request.

Please consider investigating the findings and remediating the incidents. Failure to do so may lead to compromising the associated services or software components.

🔎 Detected hardcoded secrets in your pull request
GitGuardian id Secret Commit Filename
- Generic High Entropy Secret 78fa3afdfbf964c19b4b2d36b91560698aa83178 tests/tests_app/utilities/test_login.py
- Base64 Basic Authentication 78fa3afdfbf964c19b4b2d36b91560698aa83178 tests/tests_app/utilities/test_login.py
🛠 Guidelines to remediate hardcoded secrets
  1. Understand the implications of revoking this secret by investigating where it is used in your code.
  2. Replace and store your secret safely, following best practices.
  3. Revoke and rotate this secret.
  4. If possible, rewrite git history. Rewriting git history is not a trivial act. You might completely break other contributing developers' workflow and you risk accidentally deleting legitimate data.

🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process. You are seeing this because you or someone else with access to this repository has authorized GitGuardian to scan your pull request.

Do our GitHub checks need improvements? Share your feedback!

gitguardian[bot] avatar Jan 16 '24 09:01 gitguardian[bot]