
create_lr_scheduler_with_warmup does not change init_lr to proper value

Open sadra-barikbin opened this issue 3 years ago • 10 comments

Hi,

In order to get the expected sequence of lrs from create_lr_scheduler_with_warmup's scheduler, one must not attach it to the engine on the EPOCH_COMPLETED event, because it then produces the lr passed to the optimizer's constructor first and only afterwards the warm-up lrs. This breaks the warm-up procedure. As a workaround, one could use the EPOCH_STARTED event, but that might not be a good solution.

It seems there should be something like the line below around line 1017 of param_scheduler.py, within the for loop.

param_group['lr'] = warmup_start_value
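For illustration, a hedged sketch of what that would amount to; the surrounding loop is a hypothetical paraphrase, not the actual source of param_scheduler.py:

for param_group in optimizer.param_groups:
    # proposed addition: initialize each group's lr to the warm-up start value up front,
    # so the optimizer never exposes its constructor lr before the first scheduler call
    param_group['lr'] = warmup_start_value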

To reproduce current behaviour:

import torch
import torch.nn as nn
from torch.optim.lr_scheduler import StepLR

from ignite.engine import Engine, Events
from ignite.handlers import create_lr_scheduler_with_warmup

param = nn.Parameter(torch.Tensor([3.]))
optimizer = torch.optim.SGD([param], lr=1e-3)
scheduler = StepLR(optimizer, 3)
with_warmup_scheduler = create_lr_scheduler_with_warmup(scheduler, warmup_start_value=1e-5, warmup_duration=3)

def process_func(e, b):
    param.grad = torch.Tensor([1.])
    optimizer.step()

trainer = Engine(process_func)

@trainer.on(Events.EPOCH_COMPLETED)
def _():
    print(optimizer.param_groups[0]['lr'])

trainer.add_event_handler(Events.EPOCH_COMPLETED, with_warmup_scheduler)

# assumed run call: a single-item dataset for 10 epochs matches the output below
trainer.run([0], max_epochs=10)

output:

0.001
1e-05
0.000505
0.001
0.001
0.001
0.0001
0.0001
0.0001
1e-05
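
For completeness, a hedged sketch of the EPOCH_STARTED workaround mentioned above (same setup as the reproduction code):

# workaround: attach on EPOCH_STARTED instead of EPOCH_COMPLETED, so the lr is updated
# before each epoch runs and the first epoch already uses warmup_start_value
trainer.add_event_handler(Events.EPOCH_STARTED, with_warmup_scheduler)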

sadra-barikbin avatar Jan 24 '22 20:01 sadra-barikbin

@sadra-barikbin have you seen the example from the docs: https://pytorch.org/ignite/generated/ignite.handlers.param_scheduler.create_lr_scheduler_with_warmup.html#ignite.handlers.param_scheduler.create_lr_scheduler_with_warmup

I'm not sure if it makes sense to perform warm-up on epochs.

But anyway, if you check our docs, almost all ignite.handlers.param_scheduler handlers are attached to ITERATION_STARTED or EPOCH_STARTED.
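
For reference, a minimal sketch of that pattern (illustrative names, not the exact docs snippet):

# attach the combined warm-up + scheduler handler on ITERATION_STARTED so the lr is
# updated before each training step
trainer.add_event_handler(Events.ITERATION_STARTED, scheduler)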

vfdev-5 avatar Jan 24 '22 21:01 vfdev-5

Um, you're right! People do warm-up on iterations. The question that arises is how to do warm-up on iterations and then do normal scheduling on epochs using create_lr_scheduler_with_warmup's scheduler. At the moment, it can only listen to either epoch events or iteration events.

sadra-barikbin avatar Jan 25 '22 08:01 sadra-barikbin

@sadra-barikbin you can simply express your post-warm-up epoch-wise scheduling in terms of iterations, if possible. Otherwise, you can try to combine events as below:

import torch
import torch.optim as optim
from torch.optim.lr_scheduler import ExponentialLR

from ignite.engine import Engine, Events
from ignite.handlers import create_lr_scheduler_with_warmup


def train_step(e, b):
    print(trainer.state.epoch, trainer.state.iteration, " | ", optimizer.param_groups[0]["lr"])

    
trainer = Engine(train_step)
optimizer = optim.SGD([torch.tensor([0.1])], lr=0.1234)


torch_lr_scheduler = ExponentialLR(optimizer=optimizer, gamma=0.5)

data = [0] * 8
epoch_length = len(data)
warmup_duration = 5
scheduler = create_lr_scheduler_with_warmup(torch_lr_scheduler,
                                            warmup_start_value=0.0,
                                            warmup_duration=warmup_duration)

# Trigger scheduler on iteration_started events before reaching warmup_duration
combined_events = Events.ITERATION_STARTED(event_filter=lambda _, __: trainer.state.iteration <= warmup_duration)
# Trigger scheduler on epoch_started events after the warm-up. Epochs are 1-based, thus we do 1 + 
combined_events |= Events.EPOCH_STARTED(event_filter=lambda _, __: trainer.state.epoch > 1 + warmup_duration / epoch_length)
trainer.add_event_handler(combined_events, scheduler)
   
trainer.run(data, max_epochs=10)
output:
1 1  |  0.0
1 2  |  0.03085
1 3  |  0.0617
1 4  |  0.09255
1 5  |  0.1234
1 6  |  0.1234
1 7  |  0.1234
1 8  |  0.1234
2 9  |  0.0617
2 10  |  0.0617
2 11  |  0.0617
2 12  |  0.0617
2 13  |  0.0617
2 14  |  0.0617
2 15  |  0.0617
2 16  |  0.0617
3 17  |  0.03085
3 18  |  0.03085
3 19  |  0.03085
3 20  |  0.03085
3 21  |  0.03085
3 22  |  0.03085
3 23  |  0.03085
3 24  |  0.03085
4 25  |  0.015425
4 26  |  0.015425
4 27  |  0.015425
4 28  |  0.015425
4 29  |  0.015425
4 30  |  0.015425

By the way, our docs on create_lr_scheduler_with_warmup are incorrect, cc @sdesrozis . We should trigger the scheduler on ITERATION_STARTED instead of ITERATION_COMPLETED if we want to avoid the first iteration using the optimizer's default value.
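
To illustrate, a hedged sketch using the names from the example above:

# attach on ITERATION_STARTED so the lr already equals warmup_start_value when the first
# iteration runs; with ITERATION_COMPLETED the first iteration would still use the
# optimizer's constructor lr (0.1234 here)
trainer.add_event_handler(Events.ITERATION_STARTED, scheduler)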

vfdev-5 avatar Jan 25 '22 09:01 vfdev-5

The docs issue is fixed in https://github.com/pytorch/ignite/pull/2442. Let's just add another example mixing events as in the code above: https://github.com/pytorch/ignite/issues/2441#issuecomment-1020991178

vfdev-5 avatar Jan 27 '22 17:01 vfdev-5

Shall I add the example to docs?

sadra-barikbin avatar Jan 29 '22 11:01 sadra-barikbin

Yes, this could be helpful. Thanks!

vfdev-5 avatar Jan 29 '22 12:01 vfdev-5

Furthermore, as discussed, another option is to follow sklearn's practice: we can put this example into the how-to-guides and cross-link it from the docstrings.

trsvchn avatar Feb 15 '22 13:02 trsvchn

@trsvchn By "how-to-guides" do you mean a place like this?

If so, I'll add the example to that page and then add a reference to it on create_lr_scheduler_with_warmup's page. Am I right?

sadra-barikbin avatar Feb 20 '22 10:02 sadra-barikbin

@sadra-barikbin we meant our new how-to-guides page, here

trsvchn avatar Feb 21 '22 16:02 trsvchn

@sadra-barikbin so basically you add the example to these examples, it will be rendered on the main website (the new one), and we can then reference it.

Please check the contributing guide for the examples and do not hesitate to ask for help!

trsvchn avatar Feb 21 '22 16:02 trsvchn