
KeyError: "param 'initial_lr' is not specified in param_groups[0] when resuming an optimizer"

Open · Likkkez opened this issue on Mar 13, 2023 · 2 comments

I'm trying to fine-tune 4.0-v2 using this checkpoint I found: https://huggingface.co/cr941131/sovits-4.0-v2-hubert/tree/main (not sure whether it's any good). But when I try to start training, this error occurs:

Traceback (most recent call last):
  File "/home/manjaro/.conda/envs/soft-vc/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
    fn(i, *args)
  File "/media/manjaro/NVME_2tb/NeuralNetworks/so-vits-svc-v2-44100/train.py", line 112, in run
    scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2)
  File "/home/manjaro/.conda/envs/soft-vc/lib/python3.8/site-packages/torch/optim/lr_scheduler.py", line 583, in __init__
    super(ExponentialLR, self).__init__(optimizer, last_epoch, verbose)
  File "/home/manjaro/.conda/envs/soft-vc/lib/python3.8/site-packages/torch/optim/lr_scheduler.py", line 42, in __init__
    raise KeyError("param 'initial_lr' is not specified "
KeyError: "param 'initial_lr' is not specified in param_groups[0] when resuming an optimizer"
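From the traceback: `ExponentialLR` is being constructed with `last_epoch=epoch_str - 2`, i.e. in "resume" mode, and in that mode PyTorch requires every entry in `optim_g.param_groups` to already contain an `initial_lr` key. That key is normally written the first time a scheduler is built, so an optimizer restored from a checkpoint saved without it raises exactly this KeyError. Below is a minimal, self-contained sketch of a common workaround (not an official so-vits-svc fix): backfill `initial_lr` before building the scheduler. The linear model, gamma, and epoch values are stand-ins, not values from the repo.

    import torch

    # Stand-in for the generator and its optimizer; a freshly built
    # optimizer's param_groups lack 'initial_lr', just like one restored
    # from a checkpoint that was saved without scheduler state.
    model = torch.nn.Linear(4, 4)
    optim_g = torch.optim.AdamW(model.parameters(), lr=1e-4)

    # Backfill 'initial_lr' so the scheduler can be created with
    # last_epoch != -1; assume the current lr is the starting lr.
    for group in optim_g.param_groups:
        group.setdefault("initial_lr", group["lr"])

    # 0.999875 stands in for hps.train.lr_decay; 10 for epoch_str - 2.
    scheduler_g = torch.optim.lr_scheduler.ExponentialLR(
        optim_g, gamma=0.999875, last_epoch=10
    )
    print(scheduler_g.get_last_lr())

Passing `last_epoch=-1` instead would also avoid the error, at the cost of restarting the decay schedule from the checkpoint's current learning rate.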

Where can I find official checkpoints if that one is bad?

Likkkez · Mar 13 '23

Distribution of pretrained models is being planned.

Miuzarte · Mar 13 '23

> I'm trying to fine-tune 4.0-v2 using this checkpoint I found: https://huggingface.co/cr941131/sovits-4.0-v2-hubert/tree/main […] Where can I find official checkpoints if that one is bad?

Karimsultan · Mar 14 '23

For certain unsettling reasons, we removed the pretrained model, and there is currently no official way to obtain it.

NaruseMioShirakana · Mar 16 '23