
Add learning rate scheduling support for `DeepSpeedStrategy`

Open amorehead opened this pull request 1 year ago • 11 comments

What does this PR do?

  • Adds learning rate scheduling support for DeepSpeedStrategy (a usage sketch follows this list)
  • Credit to lvhoaa for suggesting this change, which makes Fabric's support for DeepSpeed's internal features even more robust
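
A minimal usage sketch of what this enables, assuming (per the PR description) that `fabric.setup` now accepts and returns a scheduler alongside the model and optimizer under `DeepSpeedStrategy`; the exact signature is an assumption, not confirmed in this thread:

```python
import torch
from lightning.fabric import Fabric
from lightning.fabric.strategies import DeepSpeedStrategy

fabric = Fabric(strategy=DeepSpeedStrategy(stage=2), accelerator="cuda", devices=2)
fabric.launch()

model = torch.nn.Linear(32, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)

# Assumed post-PR behavior: the scheduler is forwarded to deepspeed.initialize()
# so DeepSpeed can drive the learning rate schedule internally.
model, optimizer, scheduler = fabric.setup(model, optimizer, scheduler)
```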
Before submitting
  • [ ] Was this discussed/agreed via a GitHub issue? (not for typos and docs)
  • [x] Did you read the contributor guideline, Pull Request section?
  • [x] Did you make sure your PR does only one thing, instead of bundling different changes together?
  • [ ] Did you make sure to update the documentation with your changes? (if necessary)
  • [ ] Did you write any new necessary tests? (not for typos and docs)
  • [ ] Did you verify new and existing tests pass locally with your changes?
  • [ ] Did you list all the breaking changes introduced by this pull request?
  • [ ] Did you update the CHANGELOG? (not for typos, docs, test updates, or minor internal changes/refactors)

PR review

Anyone in the community is welcome to review the PR. Before you start reviewing, make sure you have read the review guidelines. In short, see the following bullet list:

Reviewer checklist
  • [x] Is this pull request ready for review? (if not, please submit in draft mode)
  • [ ] Check that all items from Before submitting are resolved
  • [x] Make sure the title is self-explanatory and the description concisely explains the PR
  • [x] Add labels and milestones (and optionally projects) to the PR so it can be classified

📚 Documentation preview 📚: https://pytorch-lightning--20320.org.readthedocs.build/en/20320/

amorehead avatar Oct 05 '24 02:10 amorehead

Thanks for the contribution @amorehead! Let's get to a green CI and take it from there

lantiga avatar Oct 07 '24 11:10 lantiga

hey @amorehead, looks like the CI failures are legit; let me know if you can fix those

lantiga avatar Nov 12 '24 22:11 lantiga

@amorehead I'm wrapping up the last few PRs for the release. Do you have time to fix this one in the next couple of days?

lantiga avatar Dec 10 '24 22:12 lantiga

@lantiga, apologies. I'm just now getting around to fixing up this pull request. I've updated the docs/source-fabric/api/fabric_methods.rst file. Are there any other relevant docs I've missed? I believe I've already updated all the relevant docstrings for each affected strategy, such as DeepSpeedStrategy, so the corresponding docstring-generated docs should already be up to date.

amorehead avatar Jan 09 '25 19:01 amorehead

Codecov Report

❌ Patch coverage is 58.82353% with 7 lines in your changes missing coverage. Please review.
✅ Project coverage is 87%. Comparing base (6e90049) to head (35d716f).
⚠️ Report is 198 commits behind head on master.

Additional details and impacted files
@@           Coverage Diff           @@
##           master   #20320   +/-   ##
=======================================
- Coverage      87%      87%   -0%     
=======================================
  Files         268      268           
  Lines       23453    23460    +7     
=======================================
- Hits        20404    20399    -5     
- Misses       3049     3061   +12     

codecov[bot] avatar Jan 09 '25 21:01 codecov[bot]

@amorehead mind checking the last failing case:

FAILED strategies/test_model_parallel.py::test_parallelize_fn_call - ValueError: too many values to unpack (expected 2)

Borda avatar Mar 14 '25 12:03 Borda
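
This error pattern arises whenever a call site unpacks two values from a function that now returns three; a minimal sketch of the mechanism (attributing it to the new scheduler return value is an assumption, not confirmed in the thread):

```python
def parallelize_fn(module, optimizer, scheduler=None):
    # Hypothetical post-change signature: three return values instead of two.
    return module, optimizer, scheduler

try:
    module, optimizer = parallelize_fn("module", "optimizer")
except ValueError as err:
    print(err)  # too many values to unpack (expected 2)

# The fix is to update the call site to unpack all three values.
module, optimizer, scheduler = parallelize_fn("module", "optimizer")
```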

@Borda, I've just fixed this test

amorehead avatar Mar 15 '25 15:03 amorehead

> @Borda, I've just fixed this test

seems there's one left:

FAILED strategies/test_deepspeed.py::test_deepspeed_setup_module - AssertionError: expected call not found.
Expected: initialize(args=<ANY>, config={'activation_checkpointing': {'partition_activations': False, 'cpu_checkpointing': False, 'contiguous_memory_optimization': False, 'synchronize_checkpoint_boundary': False}, 'aio': {'block_size': 1048576, 'queue_depth': 8, 'single_submit': False, 'overlap_events': True, 'thread_count': 1}, 'zero_allow_untested_optimizer': True, 'zero_optimization': {'stage': 2, 'contiguous_gradients': True, 'overlap_comm': True, 'allgather_partitions': True, 'reduce_scatter': True, 'allgather_bucket_size': 200000000, 'reduce_bucket_size': 200000000, 'sub_group_size': 1000000000000}}, model=<Mock id='140346019654640'>, model_parameters=<ANY>, optimizer=None, dist_init_required=False)
Actual: initialize(args=Namespace(device_rank=1), config={'activation_checkpointing': {'partition_activations': False, 'cpu_checkpointing': False, 'contiguous_memory_optimization': False, 'synchronize_checkpoint_boundary': False}, 'aio': {'block_size': 1048576, 'queue_depth': 8, 'single_submit': False, 'overlap_events': True, 'thread_count': 1}, 'zero_allow_untested_optimizer': True, 'zero_optimization': {'stage': 2, 'contiguous_gradients': True, 'overlap_comm': True, 'allgather_partitions': True, 'reduce_scatter': True, 'allgather_bucket_size': 200000000, 'reduce_bucket_size': 200000000, 'sub_group_size': 1000000000000}}, model=<Mock id='140346019654640'>, model_parameters=<filter object at 0x7fa4daac0e80>, optimizer=None, lr_scheduler=None, dist_init_required=False)

Borda avatar Mar 17 '25 11:03 Borda
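
The actual call differs from the expected one only by the new `lr_scheduler=None` keyword (and the now-concrete `model_parameters` filter), so the mock assertion in the test needs to expect that keyword too. `lr_scheduler` is a real `deepspeed.initialize()` parameter; below is a simplified sketch of the forwarding, not Lightning's actual wrapper:

```python
import deepspeed

def setup_module(module, optimizer=None, scheduler=None, config=None):
    # deepspeed.initialize() returns (engine, optimizer, dataloader, lr_scheduler).
    engine, optimizer, _, scheduler = deepspeed.initialize(
        model=module,
        model_parameters=filter(lambda p: p.requires_grad, module.parameters()),
        optimizer=optimizer,
        lr_scheduler=scheduler,  # the keyword this PR adds to the call
        config=config,
        dist_init_required=False,
    )
    return engine, optimizer, scheduler
```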

@Borda, let's see if this latest commit of mine fixes it.

amorehead avatar Mar 19 '25 14:03 amorehead

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. If you need further help see our docs: https://lightning.ai/docs/pytorch/latest/generated/CONTRIBUTING.html#pull-request or ask the assistance of a core contributor here or on Discord. Thank you for your contributions.

stale[bot] avatar Apr 16 '25 05:04 stale[bot]

@Borda, may I ask you to check the "Read the Docs" tests and why they are failing?

amorehead avatar Apr 17 '25 15:04 amorehead

> may I ask you to check the "Read the Docs" tests and why they are failing?

They can be flaky, so if all the other docs builds pass, you're essentially fine

Borda avatar Jun 24 '25 07:06 Borda

Thanks, @Borda!

amorehead avatar Jun 27 '25 18:06 amorehead