
RuntimeError: Parent directory logs/ xxx does not exist.

zhangzhiguo0603 opened this issue 1 year ago • 4 comments

I re-downloaded the GPT_SoVITS\s2_train.py and GPT_SoVITS\utils.py files, but the error below still appears.

INFO:徐大夫第二次:Saving model and optimizer state at iteration 4 to logs/徐大夫第二次/logs_s2\G_233333333333.pth
Traceback (most recent call last):
  File "H:\GPT-SoVITS-beta\GPT_SoVITS\s2_train.py", line 600, in <module>
    main()
  File "H:\GPT-SoVITS-beta\GPT_SoVITS\s2_train.py", line 56, in main
    mp.spawn(
  File "H:\GPT-SoVITS-beta\runtime\lib\site-packages\torch\multiprocessing\spawn.py", line 239, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "H:\GPT-SoVITS-beta\runtime\lib\site-packages\torch\multiprocessing\spawn.py", line 197, in start_processes
    while not context.join():
  File "H:\GPT-SoVITS-beta\runtime\lib\site-packages\torch\multiprocessing\spawn.py", line 160, in join
    raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:

-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "H:\GPT-SoVITS-beta\runtime\lib\site-packages\torch\multiprocessing\spawn.py", line 69, in _wrap
    fn(i, *args)
  File "H:\GPT-SoVITS-beta\GPT_SoVITS\s2_train.py", line 254, in run
    train_and_evaluate(
  File "H:\GPT-SoVITS-beta\GPT_SoVITS\s2_train.py", line 470, in train_and_evaluate
    utils.save_checkpoint(
  File "H:\GPT-SoVITS-beta\GPT_SoVITS\utils.py", line 78, in save_checkpoint
    torch.save(
  File "H:\GPT-SoVITS-beta\runtime\lib\site-packages\torch\serialization.py", line 440, in save
    with _open_zipfile_writer(f) as opened_zipfile:
  File "H:\GPT-SoVITS-beta\runtime\lib\site-packages\torch\serialization.py", line 315, in _open_zipfile_writer
    return container(name_or_buffer)
  File "H:\GPT-SoVITS-beta\runtime\lib\site-packages\torch\serialization.py", line 288, in __init__
    super().__init__(torch._C.PyTorchFileWriter(str(name)))
RuntimeError: Parent directory logs/徐大夫第二次 does not exist.

There is also some other warning output during the run; I hope the developers can take a look.

H:\GPT-SoVITS-beta\runtime\lib\site-packages\torch\optim\lr_scheduler.py:139: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`.  Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
0it [00:00, ?it/s]H:\GPT-SoVITS-beta\runtime\lib\site-packages\torch\functional.py:641: UserWarning: stft with return_complex=False is deprecated. In a future pytorch release, stft will return complex tensors for all inputs, and return_complex=False will raise an error.
Note: you can still call torch.view_as_real on the complex output to recover the old return format. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\SpectralOps.cpp:867.)
  return _VF.stft(input, n_fft, hop_length, win_length, window,  # type: ignore[attr-defined]
[the stft UserWarning above is repeated several more times in the log]
[W C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\torch\csrc\distributed\c10d\reducer.cpp:1307] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration,  which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator ())
H:\GPT-SoVITS-beta\runtime\lib\site-packages\torch\functional.py:641: UserWarning: ComplexHalf support is experimental and many operators don't support it yet. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\EmptyTensor.cpp:32.)
  return _VF.stft(input, n_fft, hop_length, win_length, window,  # type: ignore[attr-defined]
H:\GPT-SoVITS-beta\runtime\lib\site-packages\torch\autograd\__init__.py:200: UserWarning: Grad strides do not match bucket view strides. This may indicate grad was not created according to the gradient layout contract, or that the param's strides changed since DDP was constructed.  This is not an error, but may impair performance.
grad.sizes() = [1, 9, 96], strides() = [57120, 96, 1]
bucket_view.sizes() = [1, 9, 96], strides() = [864, 96, 1] (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\torch\csrc\distributed\c10d\reducer.cpp:337.)
  Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass

zhangzhiguo0603 · Feb 16 '24 10:02

The errors above appeared during SoVITS training.

zhangzhiguo0603 · Feb 16 '24 10:02

I don't know Python syntax, but looking at the documentation I can see the lr_scheduler.StepLR() function while I can't find a lr_scheduler.step() call. Is my version too new, and is that what's causing the warnings?

zhangzhiguo0603 · Feb 16 '24 13:02
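
For reference, that warning is only about the call order inside the training loop, not about the Python or PyTorch version. A minimal, self-contained sketch of the order PyTorch expects (a toy model for illustration, not the actual s2_train.py loop):

```python
import torch

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.999)

for step in range(3):
    optimizer.zero_grad()
    loss = model(torch.randn(8, 4)).sum()
    loss.backward()
    optimizer.step()   # update the weights first...
    scheduler.step()   # ...then advance the learning-rate schedule
```

The warning is generally harmless; with mixed-precision training it can also appear when the gradient scaler skips the first optimizer step, so it is not necessarily a bug in the script.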

Saving the model file fails when the model file path contains Chinese characters. A fix for this was made earlier, but it was incomplete (only data preprocessing was fixed, not the final model save). For now you can change the model name to English; I will update the code later to fix it.

RVC-Boss · Feb 16 '24 14:02
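
Until that fix lands, one workaround along the lines described above is to write the checkpoint to an ASCII-only temporary file first and then move it into the Chinese-named directory. A rough sketch, assuming a hypothetical helper and a TEMP directory (illustrative only, not the actual GPT-SoVITS patch):

```python
import os
import shutil
import torch

def save_checkpoint_ascii_safe(state_dict, target_path, tmp_dir="TEMP"):
    """Hypothetical helper: torch.save() on Windows can fail when the target
    path contains non-ASCII (e.g. Chinese) characters, so write to an
    ASCII-only temporary file first and then move it into place."""
    parent = os.path.dirname(target_path)
    if parent:
        os.makedirs(parent, exist_ok=True)
    os.makedirs(tmp_dir, exist_ok=True)
    tmp_path = os.path.join(tmp_dir, "ckpt_tmp.pth")  # ASCII-only filename
    torch.save(state_dict, tmp_path)                  # safe path for the zip writer
    shutil.move(tmp_path, target_path)                # shutil handles Unicode paths
```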

Thanks for the late-night reply. I have downgraded torch for now, and training runs successfully for the time being.

conda install pytorch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 pytorch-cuda=12.1 -c pytorch -c nvidia
pip show torch
Name: torch
Version: 2.1.1+cu121

zhangzhiguo0603 · Feb 16 '24 15:02