Using too new a PyTorch version is not recommended
PyTorch 2.6+
With PyTorch 2.6 and above, the feature extraction step (extract_feature_print.py) errors out:
Traceback (most recent call last):
File "D:\RVC\infer\modules\train\extract_feature_print.py", line 89, in <module>
models, saved_cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task(
File "D:\RVC\.venv\lib\site-packages\fairseq\checkpoint_utils.py", line 425, in load_model_ensemble_and_task
state = load_checkpoint_to_cpu(filename, arg_overrides)
File "D:\RVC\.venv\lib\site-packages\fairseq\checkpoint_utils.py", line 315, in load_checkpoint_to_cpu
state = torch.load(f, map_location=torch.device("cpu"))
File "D:\RVC\.venv\lib\site-packages\torch\serialization.py", line 1470, in load
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL fairseq.data.dictionary.Dictionary was not an allowed global by default. Please use `torch.serialization.add_safe_globals([Dictionary])` or the `torch.serialization.safe_globals([Dictionary])` context manager to allowlist this global if you trust this class/function.
This is because in PyTorch 2.6 the default value of torch.load's weights_only argument changed from False to True, so the arguments have to be adjusted by hand everywhere the project loads a model.
For example, the torch.load call inside fairseq.checkpoint_utils.load_checkpoint_to_cpu needs weights_only=False added before extraction works again:
with open(local_path, "rb") as f:
- state = torch.load(f, map_location=torch.device("cpu"))
+ state = torch.load(f, map_location=torch.device("cpu"), weights_only=False)
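If you would rather not edit the installed fairseq sources, another option is to wrap torch.load once, before fairseq is imported, so the pre-2.6 default comes back everywhere. This is a sketch of my own, not something tested in this thread; `patch_default` and `fake_load` are made-up names:

```python
import functools

def patch_default(load_fn, **forced):
    """Wrap load_fn so the given keyword defaults apply unless the
    caller passes them explicitly (e.g. weights_only=False)."""
    @functools.wraps(load_fn)
    def wrapper(*args, **kwargs):
        for key, value in forced.items():
            kwargs.setdefault(key, value)
        return load_fn(*args, **kwargs)
    return wrapper

# Real usage would be (untested sketch):
#   import torch
#   torch.load = patch_default(torch.load, weights_only=False)
# done before `import fairseq`, so load_checkpoint_to_cpu picks up
# the old behaviour without touching site-packages.

# Stand-in loader demonstrating the mechanism:
def fake_load(path, weights_only=True):
    return {"path": path, "weights_only": weights_only}

patched = patch_default(fake_load, weights_only=False)
print(patched("hubert_base.pt"))
```

As PyTorch's own error message stresses, weights_only=False allows arbitrary code execution from a malicious pickle, so only do this for checkpoints you trust.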
PyTorch 2.4+
With PyTorch 2.4 and above, the training step errors out:
Traceback (most recent call last):
File "C:\Users\*****\AppData\Roaming\uv\python\cpython-3.9.21-windows-x86_64-none\lib\multiprocessing\process.py", line 315, in _bootstrap
self.run()
File "C:\Users\*****\AppData\Roaming\uv\python\cpython-3.9.21-windows-x86_64-none\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "D:\RVC\infer\modules\train\train.py", line 129, in run
dist.init_process_group(
File "D:\RVC\.venv\lib\site-packages\torch\distributed\c10d_logger.py", line 81, in wrapper
return func(*args, **kwargs)
File "D:\RVC\.venv\lib\site-packages\torch\distributed\c10d_logger.py", line 95, in wrapper
func_return = func(*args, **kwargs)
File "D:\RVC\.venv\lib\site-packages\torch\distributed\distributed_c10d.py", line 1714, in init_process_group
store, rank, world_size = next(rendezvous_iterator)
File "D:\RVC\.venv\lib\site-packages\torch\distributed\rendezvous.py", line 274, in _env_rendezvous_handler
store = _create_c10d_store(
File "D:\RVC\.venv\lib\site-packages\torch\distributed\rendezvous.py", line 194, in _create_c10d_store
return TCPStore(
RuntimeError: use_libuv was requested but PyTorch was build without libuv support
See https://github.com/RVC-Boss/GPT-SoVITS/issues/1357 for the details; in short, newer PyTorch builds no longer include libuv support.
The fix is to set the environment variable USE_LIBUV=0 by whatever means is convenient.
For example, add os.environ["USE_LIBUV"] = "0" directly in infer-web.py.
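The workaround above can be sketched as follows (nothing here beyond the thread's own suggestion):

```python
import os

# torch.distributed reads USE_LIBUV when it creates the TCPStore for
# rendezvous, so this assignment must run before
# dist.init_process_group() is called -- the top of infer-web.py is
# the simplest place to put it.
os.environ["USE_LIBUV"] = "0"

print(os.environ["USE_LIBUV"])
```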
Yes, my own local environment uses:
torch==2.1.0
torchvision==0.16.0
torchaudio==2.1.0
No problems so far in my debugging.
The pip packages in this repo are all too old. I ran into pip being too new to install fairseq, matplotlib being too new breaking the training script, and other problems. I strongly suggest the repo developers consider shipping two separate distributions: a user inference runtime and a developer training runtime.
Users who only need inference would use ONNX models packaged with an ONNX Runtime environment; the user distribution would be much smaller and easier to ship.
Users who need to train models would use the torch runtime, actively updated to support new package features.
That said, this repo doesn't seem to have many developers, probably too few to keep up. If there is a major breakthrough in speech synthesis, the project structure should be reorganized; it is too messy, with too much technical debt.
I'm already working in that direction, consolidating the pipeline into a single torch.nn.Module, but there seem to be some problems: https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/issues/2483
This repo is basically unmaintained now; see #2109 for the reasons. If you want to develop new features, you're welcome over at https://github.com/fumiama/Retrieval-based-Voice-Conversion-WebUI
Thanks, I'm already trying to fork my own RVC, implemented in pure PyTorch + ONNX. The RMS algorithm has been rewritten in pure PyTorch with Qwen2, HuBERT now uses the quantized model provided by Xenova, and RMVPE uses the repo's rmvpe.onnx together with a mel feature extractor rewritten by DeepSeek-R1. All that's left is stitching these pieces together and validating the final result.
Could you help me verify whether this feats output looks normal? I haven't fully understood what feats does inside vc_single, so I want to confirm whether Xenova's weights are usable in the RVC pipeline.
Has this been resolved? My versions are:
torch 2.1.0+cu118
torchaudio 2.1.0+cu118
torchvision 0.16.0+cu118
and I still get this:
/my_workspace/server/Mangio-RVC-Fork/logs/olamago
load model(s) from hubert_base.pt
Traceback (most recent call last):
File "/my_workspace/server/Mangio-RVC-Fork/extract_feature_print.py", line 73, in <module>
models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task(
File "/usr/local/lib/python3.10/dist-packages/fairseq/checkpoint_utils.py", line 425, in load_model_ensemble_and_task
state = load_checkpoint_to_cpu(filename, arg_overrides)
File "/usr/local/lib/python3.10/dist-packages/fairseq/checkpoint_utils.py", line 315, in load_checkpoint_to_cpu
state = torch.load(f, map_location=torch.device("cpu"))
File "/usr/local/lib/python3.10/dist-packages/torch/serialization.py", line 1470, in load
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL fairseq.data.dictionary.Dictionary was not an allowed global by default. Please use `torch.serialization.add_safe_globals([Dictionary])` or the `torch.serialization.safe_globals([Dictionary])` context manager to allowlist this global if you trust this class/function.
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
Which Python environment are you using, Conda or a virtualenv? And how are you running it, a Jupyter notebook or a normal .py file?
I mean, if you really did downgrade your PyTorch, my guess is that you didn't reactivate your environment.
I am not using any venv.
Is there anywhere else you've installed PyTorch, or any other packages? It looks like your Python isn't loading your 2.1.0 package correctly.
Try listing your package information with pip:
pip show torch
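A quick stdlib check (my own sketch; `locate` is just an illustrative name) for whether the interpreter actually sees the install that pip reports:

```python
from importlib import metadata, util

def locate(name):
    """Return (installed version, import location) for a package,
    or None if this interpreter cannot import it at all."""
    spec = util.find_spec(name)
    if spec is None:
        return None
    try:
        version = metadata.version(name)
    except metadata.PackageNotFoundError:
        # Importable module without distribution metadata (e.g. stdlib)
        version = "unknown (no dist metadata)"
    return version, spec.origin

# If this prints a 2.6.x version, or a path outside the environment
# you think you are in, a second install is shadowing the 2.1.0 one.
print(locate("torch"))
```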
So I tried to run RVC on torch 2.6.0.
My torch versions are as follows:
torch 2.6.0
torchaudio 2.6.0
Okay, right now I'm trying a couple of ways to resolve this.
1st way:
Open /usr/local/lib/python3.10/dist-packages/fairseq/checkpoint_utils.py and modify line 315 to explicitly set weights_only=False.
The 1st one worked for me.
2nd way:
Downgrade the torch version.
I did
pip uninstall -y torch torchaudio torchvision
and
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --force-reinstall --index-url https://download.pytorch.org/whl/cu118
but that didn't work for me...
3rd way:
In extract_feature_print.py,
I added these safe globals before model loading:
import torch.serialization
from fairseq.data.dictionary import Dictionary
torch.serialization.add_safe_globals([Dictionary])
This was on these versions:
torch 2.6.0
torchaudio 2.6.0
Also, I have created a fork if you want to use it yourself: https://github.com/anurag12-webster/Mangio-RVC-Tweaks
This worked for me.