# Reset trainer variable `should_stop` when `fit` is called
## What does this PR do?
Reset the trainer variable `should_stop` when `fit` is called.

If `fit` is called after early stopping has already stopped training, the model will not continue training, because the trainer flag `should_stop` is currently not reset when `fit` is called. This PR changes that so `should_stop` is reset every time `fit` is called.
Fixes #18727
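A minimal sketch of the intended fix, assuming the reset is placed where `fit` starts (the exact location inside the Trainer internals is an assumption, not the PR's literal diff):

```python
class Trainer:
    def fit(self, model, *args, **kwargs):
        # Reset the stop flag so a trainer that was halted by EarlyStopping
        # (or by user code setting `trainer.should_stop = True`) can train
        # again on a subsequent fit() call.
        self.should_stop = False
        ...  # existing fit logic
```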
## Before submitting
- [ ] Was this discussed/agreed via a GitHub issue? (not for typos and docs)
- [ ] Did you read the contributor guideline, Pull Request section?
- [ ] Did you make sure your PR does only one thing, instead of bundling different changes together?
- [ ] Did you make sure to update the documentation with your changes? (if necessary)
- [ ] Did you write any new necessary tests? (not for typos and docs)
- [ ] Did you verify new and existing tests pass locally with your changes?
- [ ] Did you list all the breaking changes introduced by this pull request?
- [ ] Did you update the CHANGELOG? (not for typos, docs, test updates, or minor internal changes/refactors)
## PR review
Anyone in the community is welcome to review the PR. Before you start reviewing, make sure you have read the review guidelines. In short, see the following bullet list:
## Reviewer checklist
- [ ] Is this pull request ready for review? (if not, please submit in draft mode)
- [ ] Check that all items from Before submitting are resolved
- [ ] Make sure the title is self-explanatory and the description concisely explains the PR
- [ ] Add labels and milestones (and optionally projects) to the PR so it can be classified
📚 Documentation preview 📚: https://pytorch-lightning--19177.org.readthedocs.build/en/19177/
It seems this is failing on a test that is designed to make sure the trainer stays at `should_stop=True`, related to #15708:
```python
import pytest
from unittest.mock import Mock

from lightning.pytorch import Trainer
from lightning.pytorch.demos.boring_classes import BoringModel


@pytest.mark.parametrize(("min_epochs", "min_steps", "val_count"), [(3, None, 3), (None, 3, 2)])
def test_should_stop_triggers_validation_once(min_epochs, min_steps, val_count, tmp_path):
    """Regression test for issue #15708.

    Test that the request for `should_stop=True` only triggers validation when the Trainer is allowed to stop
    (min_epochs/steps is satisfied).
    """
    model = BoringModel()
    trainer = Trainer(
        default_root_dir=tmp_path,
        num_sanity_val_steps=0,
        limit_val_batches=2,
        limit_train_batches=2,
        max_epochs=3,
        min_epochs=min_epochs,
        min_steps=min_steps,
        enable_model_summary=False,
        enable_checkpointing=False,
    )
    trainer.should_stop = True  # Request to stop before min_epochs/min_steps are reached
    trainer.fit_loop.epoch_loop.val_loop.run = Mock()
    trainer.fit(model)
    assert trainer.fit_loop.epoch_loop.val_loop.run.call_count == val_count
```
I have changed the above test to use an `EarlyStopping` condition (imported from `lightning.pytorch.callbacks`) instead of setting the flag directly through `trainer.should_stop = True`, so that the test now passes with the following change:
```diff
+    class NewBoring(BoringModel):
+        def training_step(self, batch, batch_idx):
+            self.log("loss", self.step(batch))
+            return {"loss": self.step(batch)}
+
-    model = BoringModel()
+    model = NewBoring()
+    # create a stopping condition with a high threshold so it triggers immediately
+    # check the condition before validation so the count is unaffected
+    stopping = EarlyStopping(monitor="loss", check_on_train_epoch_end=True, stopping_threshold=100)
     trainer = Trainer(
         default_root_dir=tmp_path,
         num_sanity_val_steps=0,
         limit_val_batches=2,
         limit_train_batches=2,
         max_epochs=3,
         min_epochs=min_epochs,
         min_steps=min_steps,
         enable_model_summary=False,
         enable_checkpointing=False,
+        callbacks=[stopping],
     )
-    trainer.should_stop = True  # Request to stop before min_epochs/min_steps are reached
     trainer.fit_loop.epoch_loop.val_loop.run = Mock()
     trainer.fit(model)
     assert trainer.fit_loop.epoch_loop.val_loop.run.call_count == val_count
```
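For context, here is a hypothetical minimal script showing the behavior this PR fixes; `NewBoring` is the model from the diff above, and the repeated `fit` call mirrors the scenario from #18727:

```python
from lightning.pytorch import Trainer
from lightning.pytorch.callbacks import EarlyStopping

model = NewBoring()
trainer = Trainer(
    max_epochs=10,
    callbacks=[EarlyStopping(monitor="loss", check_on_train_epoch_end=True, stopping_threshold=100)],
)
trainer.fit(model)  # EarlyStopping fires and sets trainer.should_stop = True
trainer.fit(model)  # without this PR: exits almost immediately, because should_stop was never reset
```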
Codecov Report
Merging #19177 (005209c) into master (2a827f3) will decrease coverage by 35%. The diff coverage is 100%.
Additional details and impacted files
```diff
@@            Coverage Diff            @@
##           master   #19177     +/-   ##
==========================================
- Coverage      83%      48%     -35%
==========================================
  Files         450      442       -8
  Lines       38250    38098     -152
==========================================
- Hits        31893    18438   -13455
- Misses       6357    19660   +13303
```
Is this PR still in progress?