Support `log_every_n_steps` with validate|test
What does this PR do?
Fixes https://github.com/Lightning-AI/lightning/issues/10436
Logging per step during fit's validation, regular validation, testing, or predicting is generally not useful when the logged values depend on the optimization process. However, sometimes you want to log values that do not depend on it, such as throughput-related metrics (as in #18848) or batch-specific metrics.
This PR adds support for configuring the logging interval under these circumstances.
This is only a breaking change if the user was calling `self.log(..., on_step=True)`: the previous behavior is equivalent to setting `log_every_n_steps=1`, since this value was previously ignored.
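As a usage sketch (the model, metric name, and timing logic below are illustrative assumptions, not code from this PR):

```python
import time

import torch
from torch.utils.data import DataLoader
from lightning.pytorch import LightningModule, Trainer


class ThroughputModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def test_step(self, batch, batch_idx):
        start = time.perf_counter()
        out = self.layer(batch).sum()
        # A batch-level value that does not depend on the optimization process;
        # with this PR, on_step=True during test honors Trainer(log_every_n_steps=...)
        # instead of logging on every batch.
        self.log("batch_time", time.perf_counter() - start, on_step=True)
        return out


# Step-level test logs are now emitted every 10 batches rather than every batch.
trainer = Trainer(log_every_n_steps=10, limit_test_batches=50)
trainer.test(ThroughputModel(), dataloaders=DataLoader(torch.randn(128, 32), batch_size=2))
```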
📚 Documentation preview 📚: https://pytorch-lightning--18895.org.readthedocs.build/en/18895/
cc @borda @carmocca @justusschock
⛈️ Required checks status: Has failure 🔴
Warning: This job will need to be re-run to merge your PR. If you do not have write access to the repository, you can ask Lightning-AI/lai-frameworks to re-run it. If you push a new commit, all of CI will re-trigger.
Groups summary
🔴 pytorch_lightning: Tests workflow
These checks are required after the changes to src/lightning/pytorch/callbacks/device_stats_monitor.py, src/lightning/pytorch/callbacks/lr_monitor.py, src/lightning/pytorch/callbacks/throughput_monitor.py, src/lightning/pytorch/trainer/connectors/logger_connector/logger_connector.py, src/lightning/pytorch/trainer/trainer.py, tests/tests_pytorch/trainer/logging_/test_eval_loop_logging.py.
🟢 pytorch_lightning: Azure GPU
| Check ID | Status | |
|---|---|---|
| pytorch-lightning (GPUs) (testing Lightning \| latest) | success | ✅ |
| pytorch-lightning (GPUs) (testing PyTorch \| latest) | success | ✅ |
🟢 pytorch_lightning: Benchmarks
| Check ID | Status | |
|---|---|---|
| lightning.Benchmarks | success | ✅ |
These checks are required after the changes to src/lightning/pytorch/callbacks/device_stats_monitor.py, src/lightning/pytorch/callbacks/lr_monitor.py, src/lightning/pytorch/callbacks/throughput_monitor.py, src/lightning/pytorch/trainer/connectors/logger_connector/logger_connector.py, src/lightning/pytorch/trainer/trainer.py.
🟢 pytorch_lightning: Docs
| Check ID | Status | |
|---|---|---|
| docs-make (pytorch, doctest) | success | ✅ |
| docs-make (pytorch, html) | success | ✅ |
🟢 mypy
| Check ID | Status | |
|---|---|---|
| mypy | success | ✅ |
🟢 install
Thank you for your contribution! 💜
Note: This comment is automatically generated and updates for 60 minutes every 180 seconds. If you have any other questions, contact carmocca for help.
Hey @carmocca, there are some significant differences here that should be discussed. Here is my simple example:
```python
import os

import torch
from lightning import seed_everything
from lightning.pytorch import LightningModule, Trainer
from torch.utils.data import DataLoader, Dataset
from lightning.pytorch.loggers import TensorBoardLogger


class RandomDataset(Dataset):
    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len


class BoringModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("train_loss", loss)
        return {"loss": loss}

    def validation_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("valid_loss", loss)

    def test_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("test_loss", loss)

    def configure_optimizers(self):
        return torch.optim.SGD(self.layer.parameters(), lr=0.1)


def run():
    seed_everything(0)
    train_data = DataLoader(RandomDataset(32, 64), batch_size=2)
    val_data = DataLoader(RandomDataset(32, 64), batch_size=2)
    test_data = DataLoader(RandomDataset(32, 64), batch_size=2)
    logger = TensorBoardLogger("tb_logs", name="my_model")
    model = BoringModel()
    trainer = Trainer(
        default_root_dir=os.getcwd(),
        logger=logger,
        limit_train_batches=10,
        limit_val_batches=10,
        limit_test_batches=10,
        num_sanity_val_steps=4,
        max_epochs=3,
        enable_model_summary=False,
        log_every_n_steps=2,
    )
    trainer.fit(model, train_dataloaders=train_data, val_dataloaders=val_data)
    trainer.validate(model, val_data)
    trainer.test(model, dataloaders=test_data)


if __name__ == "__main__":
    run()
```
Running `tensorboard --logdir tb_logs` and comparing `master` with this branch. Red: `master`. Blue: this PR.
Issue 1: Here we see that the new behavior no longer logs the `epoch` value at the epoch-end step (blue stops at step 29). Because it now falls short of the true global step, there would be a gap when resuming. For this reason, I believe the previous behavior should be kept.
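To check which steps were actually written, one can inspect the event file directly; a minimal sketch (the path assumes the `tb_logs/my_model` layout from the script above):

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("tb_logs/my_model/version_0")
acc.Reload()
# A missing epoch-end entry here is what produces the gap when resuming.
print([event.step for event in acc.Scalars("epoch")])
```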
Issue 2: Here the test loss no longer gets logged at the global step; instead, the step is local to the test stage.
Issue 3: Finally, a few things are going on with validation. 1) The fit-validation values are no longer logged because they don't fall into the logging interval (they are epoch-end). 2) Without the global step for fit-validation logging, you can no longer track the validation loss across training: it all gets logged to the same step value.
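A hypothetical workaround, if one wanted to keep fit-validation metrics aligned with the training global step, would be to write to the logger directly instead of going through `self.log` (a sketch, not something this PR endorses):

```python
def validation_step(self, batch, batch_idx):
    loss = self(batch).sum()
    # Log against the training global step so the validation curve
    # stays comparable across epochs on the same x-axis.
    self.logger.log_metrics({"valid_loss": loss.item()}, step=self.trainer.global_step)
```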
I'll look at Issues 1 and 3. Thanks. Issue 2 is expected and is the point of this PR: testing uses its own index instead of the training global step.
⚠️ GitGuardian has uncovered 2 secrets following the scan of your pull request.
Please consider investigating the findings and remediating the incidents. Failure to do so may lead to compromising the associated services or software components.
🔎 Detected hardcoded secrets in your pull request
| GitGuardian id | Secret | Commit | Filename | |
|---|---|---|---|---|
| - | Generic High Entropy Secret | 78fa3afdfbf964c19b4b2d36b91560698aa83178 | tests/tests_app/utilities/test_login.py | View secret |
| - | Base64 Basic Authentication | 78fa3afdfbf964c19b4b2d36b91560698aa83178 | tests/tests_app/utilities/test_login.py | View secret |
🛠 Guidelines to remediate hardcoded secrets
- Understand the implications of revoking this secret by investigating where it is used in your code.
- Replace and store your secret safely. Learn the best practices here.
- Revoke and rotate this secret.
- If possible, rewrite git history. Rewriting git history is not a trivial act. You might completely break other contributing developers' workflow and you risk accidentally deleting legitimate data.
To avoid such incidents in the future, consider:
- following these best practices for managing and storing secrets, including API keys and other credentials
- installing secret detection on pre-commit to catch secrets before they leave your machine and ease remediation
🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process. You are seeing this because you or someone else with access to this repository has authorized GitGuardian to scan your pull request.
Do our GitHub checks need improvement? Share your feedback!