cp-vton-plus
LR scheduling is used improperly
In the paper, the authors claim: "The learning rate was first fixed at 0.0001 for 100K steps and then linearly decays to zero for the remaining steps."
However, the `step` method of the LR schedulers is never called, so the learning rate stays constant throughout training.
I believe the fix is to add `scheduler.step()` on lines 105 and 172 of `train.py` (the PyTorch documentation regarding LR schedulers can be found here).
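For reference, a minimal sketch of the intended schedule: keep the LR fixed, then decay linearly to zero, with `scheduler.step()` called once per training iteration. The model, optimizer, and step counts here are placeholders (the paper uses 100K steps; small values are used below for illustration), not the actual `train.py` code.

```python
import torch

# Hypothetical stand-ins for the model/optimizer built in train.py.
model = torch.nn.Linear(10, 10)
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)

keep_step = 100   # steps at the fixed LR (100K in the paper)
decay_step = 100  # steps over which the LR decays linearly to zero

# Fixed LR for the first keep_step steps, then linear decay to zero,
# mirroring the schedule described in the paper.
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer,
    lr_lambda=lambda step: 1.0 - max(0, step - keep_step) / float(decay_step),
)

for step in range(keep_step + decay_step):
    optimizer.step()   # parameter update (gradients omitted in this sketch)
    scheduler.step()   # the missing call: advances the LR schedule
```

Without the `scheduler.step()` call at the end of each iteration, `LambdaLR` never advances `last_epoch`, so the multiplier stays at 1.0 and the LR remains 0.0001 forever.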
If the authors are OK with it, I will create a PR addressing this issue.
On a closer look, this issue also affects the CP-VTON repository.
Hi @Erroler , thank you very much for pointing it out. Sure, please go ahead and submit a PR. Thank you.