Fix `BatchSizeFinder` leaving model in train state

Open tanaymeh opened this issue 1 year ago • 11 comments

What does this PR do?

This PR patches the bug where `BatchSizeFinder` would leave the model in train mode when used with `trainer.validate`.

Fixes #18813
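
For context, a minimal repro sketch of the reported behavior (the model, dataset, and callback arguments below are illustrative, not taken from the issue):

import torch
from torch.utils.data import DataLoader, Dataset
from lightning.pytorch import LightningModule, Trainer
from lightning.pytorch.callbacks import BatchSizeFinder


class RandomDataset(Dataset):
    def __len__(self):
        return 64

    def __getitem__(self, idx):
        return torch.randn(32)


class DemoModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.batch_size = 2  # attribute the finder scales (default batch_arg_name)
        self.layer = torch.nn.Linear(32, 2)

    def validation_step(self, batch, batch_idx):
        return self.layer(batch).sum()

    def val_dataloader(self):
        return DataLoader(RandomDataset(), batch_size=self.batch_size)


model = DemoModel()
trainer = Trainer(callbacks=[BatchSizeFinder()])
trainer.validate(model)
# Before this fix the assertion fails: the finder left the model in train mode.
assert not model.training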

Before submitting
  • [x] Was this discussed/agreed via a GitHub issue? (not for typos and docs)
  • [x] Did you read the contributor guideline, Pull Request section?
  • [x] Did you make sure your PR does only one thing, instead of bundling different changes together?
  • [ ] Did you make sure to update the documentation with your changes? (if necessary)
  • [ ] Did you write any new necessary tests? (not for typos and docs)
  • [ ] Did you verify new and existing tests pass locally with your changes?
  • [ ] Did you list all the breaking changes introduced by this pull request?
  • [ ] Did you update the CHANGELOG? (not for typos, docs, test updates, or minor internal changes/refactors)

PR review

@BoringDonut

Anyone in the community is welcome to review the PR. Before you start reviewing, make sure you have read the review guidelines. In short, see the following bullet-list:

Reviewer checklist
  • [x] Is this pull request ready for review? (if not, please submit in draft mode)
  • [x] Check that all items from Before submitting are resolved
  • [x] Make sure the title is self-explanatory and the description concisely explains the PR
  • [x] Add labels and milestones (and optionally projects) to the PR so it can be classified

📚 Documentation preview 📚: https://pytorch-lightning--18826.org.readthedocs.build/en/18826/

tanaymeh avatar Oct 19 '23 14:10 tanaymeh

Other than the aforementioned missing arg, this PR seems to fix the issue for val/test/prediction. Thanks a lot!

BoringDonut avatar Oct 19 '23 14:10 BoringDonut

@BoringDonut Thanks for the review! I have added the requested changes; could you please review them again?

Thanks again!

tanaymeh avatar Oct 19 '23 14:10 tanaymeh

Codecov Report

Merging #18826 (591eb4e) into master (b8a96fe) will decrease coverage by 35%. Report is 1 commit behind head on master. The diff coverage is 100%.

Additional details and impacted files
@@            Coverage Diff             @@
##           master   #18826      +/-   ##
==========================================
- Coverage      84%      49%     -35%     
==========================================
  Files         443      435       -8     
  Lines       36123    35974     -149     
==========================================
- Hits        30231    17579   -12652     
- Misses       5892    18395   +12503     

codecov[bot] avatar Oct 19 '23 15:10 codecov[bot]

@tanaymeh @BoringDonut have you found where the batch size tuner calls model.train()? Ideally, we would want to make the finder not change it in the first place; that would be the real fix IMO. We should leave the responsibility of calling model.train()/eval() to the respective loop.

awaelchli avatar Oct 21 '23 14:10 awaelchli

@tanaymeh @BoringDonut have you found where the batch size tuner calls model.train()? Ideally, we would want to make the finder not change it in the first place; that would be the real fix IMO. We should leave the responsibility of calling model.train()/eval() to the respective loop.

@awaelchli I haven't found where the bug happens yet, but I am working on it!

tanaymeh avatar Oct 21 '23 14:10 tanaymeh

have you found where the batch size tuner calls model.train()?

@tanaymeh @awaelchli

I think I found it. The call trace looks like this:

  1. https://github.com/Lightning-AI/lightning/blob/e7afe04ee86b64c76a5446088b3b75d9c275e5bf/src/lightning/pytorch/tuner/batch_size_scaling.py#L333, which redirects to
  2. https://github.com/Lightning-AI/lightning/blob/e7afe04ee86b64c76a5446088b3b75d9c275e5bf/src/lightning/pytorch/loops/evaluation_loop.py#L108, the last line of which calls return self.on_run_end()
  3. https://github.com/Lightning-AI/lightning/blob/e7afe04ee86b64c76a5446088b3b75d9c275e5bf/src/lightning/pytorch/loops/evaluation_loop.py#L246, and this method assumes it runs within the train loop, so it puts the model back into the train state
  4. https://github.com/Lightning-AI/lightning/blob/e7afe04ee86b64c76a5446088b3b75d9c275e5bf/src/lightning/pytorch/loops/evaluation_loop.py#L270, and this call to _on_evaluation_model_train ends up in
  5. https://github.com/Lightning-AI/lightning/blob/e7afe04ee86b64c76a5446088b3b75d9c275e5bf/src/lightning/pytorch/core/hooks.py#L166

Basically, it seems BatchSizeFinder should either not call _EvaluationLoop through .run(), since that method unconditionally puts the model back into the train state, or it should compensate by calling model.eval() when it knows it is working with an _EvaluationLoop. I don't see an easy way to modify _EvaluationLoop.run() to address this without too much hassle. As such, I think the solution by @tanaymeh looks reasonable. To be more consistent with the existing BatchSizeFinder style, the call to .eval() could be done in BatchSizeFinder.__scale_batch_restore_params.
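
To make the compensation idea concrete, a hypothetical sketch (the helper name is made up and the way the loop is obtained is approximate; this is not the actual PR diff):

def _run_eval_trial(trainer):
    # Hypothetical helper: remember the module's mode before running a trial.
    module = trainer.lightning_module
    was_training = module.training
    try:
        # _EvaluationLoop.run() -> on_run_end() unconditionally flips the
        # module back into train mode, which is the bug traced above.
        trainer.validate_loop.run()
    finally:
        # Restore whatever mode the caller had set (eval for trainer.validate()).
        module.train(was_training)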

BoringDonut avatar Oct 21 '23 22:10 BoringDonut

@BoringDonut I have added your previously suggested change of moving `is_training` inside the `scale_batch_size` function.

Please let me know if I should make any more changes.

cc: @awaelchli

tanaymeh avatar Oct 22 '23 19:10 tanaymeh

Do you think you can add a test to tests/tests_pytorch/tuner/test_scale_batch_size.py?

@carmocca Should the test be similar to @BoringDonut's script (here), which checks whether the model produces the same output in train and eval modes?

tanaymeh avatar Oct 25 '23 14:10 tanaymeh

I would suggest calling _scale_batch_size directly with some unittest Mocks, and asserting that the original training state is preserved
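
Roughly the shape such a test could take (an illustrative, self-contained sketch; the stub below stands in for the real lightning.pytorch.tuner.batch_size_scaling._scale_batch_size and makes no claims about that function's internals):

from unittest.mock import Mock


def _scale_batch_size_stub(trainer):
    # Stand-in for _scale_batch_size: a correct implementation restores the
    # module's original train/eval mode when it is done.
    was_training = trainer.lightning_module.training
    trainer.lightning_module.train(was_training)


def test_scale_batch_size_preserves_training_state():
    trainer = Mock()
    module = Mock()
    module.training = False  # as if we arrived here from trainer.validate()
    trainer.lightning_module = module

    _scale_batch_size_stub(trainer)

    # The finder must not have flipped the module into train mode.
    assert module.training is False
    module.train.assert_called_with(False)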

carmocca avatar Oct 27 '23 17:10 carmocca

I would suggest calling _scale_batch_size directly with some unittest Mocks, and asserting that the original training state is preserved

@carmocca Should I create a mock for the trainer and the lightning module as well?

from unittest.mock import Mock

...

trainer = Mock()
lightning_module = Mock()
trainer.lightning_module = lightning_module

...

tanaymeh avatar Oct 30 '23 20:10 tanaymeh

⚠️ GitGuardian has uncovered 2 secrets following the scan of your pull request.

Please consider investigating the findings and remediating the incidents. Failure to do so may lead to compromising the associated services or software components.

🔎 Detected hardcoded secrets in your pull request
GitGuardian id | Secret                      | Commit                                   | Filename
-              | Generic High Entropy Secret | 78fa3afdfbf964c19b4b2d36b91560698aa83178 | tests/tests_app/utilities/test_login.py
-              | Base64 Basic Authentication | 78fa3afdfbf964c19b4b2d36b91560698aa83178 | tests/tests_app/utilities/test_login.py
🛠 Guidelines to remediate hardcoded secrets
  1. Understand the implications of revoking this secret by investigating where it is used in your code.
  2. Replace and store your secret safely, following best practices.
  3. Revoke and rotate this secret.
  4. If possible, rewrite git history. Rewriting git history is not a trivial act. You might completely break other contributing developers' workflow and you risk accidentally deleting legitimate data.


🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process. You are seeing this because you or someone else with access to this repository has authorized GitGuardian to scan your pull request.

Do our GitHub checks need improvement? Share your feedback!

gitguardian[bot] avatar Jan 16 '24 09:01 gitguardian[bot]