verl
[feat][BREAKING] Megatron: Support learning rate scheduler
Checklist Before Starting
- [x] Search for similar PR(s).
What does this PR do?
Support the learning rate (LR) scheduler in the Megatron backend.
High-Level Design
The Megatron optimizer APIs still differ somewhat from the FSDP optimizer's; see the notes under Specific Changes.
Specific Changes
API
```yaml
optim:
  lr: 1e-6
  clip_grad: 1.0
  total_training_steps: -1  # must be overridden by the trainer at runtime
  lr_warmup_init: 0.0  # initial learning rate for warmup, defaults to 0.0
  lr_warmup_steps: -1  # prioritized; negative values delegate to lr_warmup_steps_ratio
  lr_warmup_steps_ratio: 0.  # the total steps will be injected at runtime
  lr_decay_steps: null
  lr_decay_style: linear  # select from constant/linear/cosine/inverse_square_root
  min_lr: 0.0  # minimum learning rate, defaults to 0.0
  weight_decay: 0.01
  weight_decay_incr_style: constant  # select from constant/linear/cosine
  lr_wsd_decay_style: exponential  # select from constant/exponential/cosine
  lr_wsd_decay_steps: null
  use_checkpoint_opt_param_scheduler: False  # save/restore the scheduler state with checkpoints
```
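To illustrate how the two warmup options in the config above interact, here is a minimal sketch of the resolution logic. The function name and exact fallback behavior are assumptions for illustration, not the verbatim verl implementation; they follow the comments on `lr_warmup_steps` and `lr_warmup_steps_ratio`.

```python
def resolve_warmup_steps(lr_warmup_steps: int,
                         lr_warmup_steps_ratio: float,
                         total_training_steps: int) -> int:
    """Hypothetical helper: lr_warmup_steps takes priority; a negative
    value delegates to lr_warmup_steps_ratio * total_training_steps."""
    if lr_warmup_steps >= 0:
        return lr_warmup_steps
    return int(lr_warmup_steps_ratio * total_training_steps)

# Explicit steps win over the ratio; a negative value falls back to it.
print(resolve_warmup_steps(50, 0.1, 1000))   # -> 50
print(resolve_warmup_steps(-1, 0.1, 1000))   # -> 100
```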
Notice that there are some differences in APIs between the Megatron optimizer and the FSDP optimizer:
- The Megatron optimizer scheduler names the period after warmup `lr_decay_steps`, so the `warmup_style` actually means the style of LR decay after warmup.
- The Megatron optimizer also supports a weight-decay scheduling mechanism (`weight_decay_incr_style`).
- `use_checkpoint_opt_param_scheduler` determines whether to use the checkpointed optimizer parameter scheduler. If set to True, the scheduler state is saved in the checkpoint and loaded from it when resuming training.
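The resulting schedule shape (linear warmup followed by the chosen decay style) can be sketched as follows. This is an illustrative approximation of a Megatron-style scheduler, not the exact `OptimizerParamScheduler` code; the step-boundary conventions are assumptions.

```python
import math

def lr_at_step(step, lr, min_lr, lr_warmup_init, warmup_steps, decay_steps,
               decay_style="cosine"):
    """Sketch: linear warmup from lr_warmup_init to lr over warmup_steps,
    then decay toward min_lr over decay_steps (assumed semantics)."""
    if step < warmup_steps:
        return lr_warmup_init + (lr - lr_warmup_init) * step / warmup_steps
    if step >= warmup_steps + decay_steps:
        return min_lr
    frac = (step - warmup_steps) / decay_steps
    if decay_style == "linear":
        coeff = 1.0 - frac
    elif decay_style == "cosine":
        coeff = 0.5 * (1.0 + math.cos(math.pi * frac))
    else:  # constant
        return lr
    return min_lr + (lr - min_lr) * coeff
```

For example, with `lr=1e-6`, `min_lr=0.0`, 10 warmup steps, and 90 cosine decay steps, the LR rises to `1e-6` at step 10 and reaches roughly half its peak midway through the decay.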
Usage Example
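A hypothetical override of the Megatron actor's optimizer config (the `actor_rollout_ref.actor.optim` path is an assumption; adapt it to the actual trainer config layout):

```yaml
actor_rollout_ref:
  actor:
    optim:
      lr: 1e-6
      lr_warmup_steps_ratio: 0.05
      lr_decay_style: cosine
      min_lr: 1e-7
```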
Test
For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.
Additional Info.
- Issue Number: Fixes issue # or discussion # if any.
- Training: Megatron
- Inference: none
Checklist Before Submitting
- [x] Read the Contribute Guide.
- [x] Apply pre-commit checks.
- [x] Add `[BREAKING]` to the PR title if it breaks any API.
- [x] Update the documentation about your changes in the docs.
- [x] Add CI test(s) if necessary.