GPT-SoVITS

Error during the one-click formatting (一键三连) step

Open GrootLiu opened this issue 1 year ago • 7 comments

~/liuao/GPT-SoVITS   main ?1 ❯ python webui.py --listen    7s  GPTSoVits jss41@node41  08:24:14
Running on local URL: http://0.0.0.0:9874
"/home/jss41/miniconda3/envs/GPTSoVits/bin/python" GPT_SoVITS/prepare_datasets/1-get-text.py
"/home/jss41/miniconda3/envs/GPTSoVits/bin/python" GPT_SoVITS/prepare_datasets/1-get-text.py
Traceback (most recent call last):
  File "/home/jss41/miniconda3/envs/GPTSoVits/lib/python3.9/site-packages/transformers/utils/hub.py", line 398, in cached_file
    resolved_file = hf_hub_download(
  File "/home/jss41/miniconda3/envs/GPTSoVits/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 110, in _inner_fn
    validate_repo_id(arg_value)
  File "/home/jss41/miniconda3/envs/GPTSoVits/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 158, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large'. Use repo_type argument if needed.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/jss41/liuao/GPT-SoVITS/GPT_SoVITS/prepare_datasets/1-get-text.py", line 56, in <module>
    tokenizer = AutoTokenizer.from_pretrained(bert_pretrained_dir)
  File "/home/jss41/miniconda3/envs/GPTSoVits/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 767, in from_pretrained
    tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
  File "/home/jss41/miniconda3/envs/GPTSoVits/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 600, in get_tokenizer_config
    resolved_config_file = cached_file(
  File "/home/jss41/miniconda3/envs/GPTSoVits/lib/python3.9/site-packages/transformers/utils/hub.py", line 462, in cached_file
    raise EnvironmentError(
OSError: Incorrect path_or_model_id: 'GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large'. Please provide either the path to a local folder or the repo_id of a model on the Hub.

Traceback (most recent call last):
  File "/home/jss41/miniconda3/envs/GPTSoVits/lib/python3.9/site-packages/transformers/utils/hub.py", line 398, in cached_file
    resolved_file = hf_hub_download(
  File "/home/jss41/miniconda3/envs/GPTSoVits/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 110, in _inner_fn
    validate_repo_id(arg_value)
  File "/home/jss41/miniconda3/envs/GPTSoVits/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 158, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large'. Use repo_type argument if needed.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/jss41/liuao/GPT-SoVITS/GPT_SoVITS/prepare_datasets/1-get-text.py", line 56, in <module>
    tokenizer = AutoTokenizer.from_pretrained(bert_pretrained_dir)
  File "/home/jss41/miniconda3/envs/GPTSoVits/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 767, in from_pretrained
    tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
  File "/home/jss41/miniconda3/envs/GPTSoVits/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 600, in get_tokenizer_config
    resolved_config_file = cached_file(
  File "/home/jss41/miniconda3/envs/GPTSoVits/lib/python3.9/site-packages/transformers/utils/hub.py", line 462, in cached_file
    raise EnvironmentError(
OSError: Incorrect path_or_model_id: 'GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large'. Please provide either the path to a local folder or the repo_id of a model on the Hub.

Traceback (most recent call last):
  File "/home/jss41/liuao/GPT-SoVITS/webui.py", line 580, in open1abc
    with open(txt_path, "r",encoding="utf8") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'logs/liuao/2-name2text-0.txt'

GrootLiu avatar Mar 09 '24 08:03 GrootLiu
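For readers hitting the same error: the two tracebacks are chained. 1-get-text.py hands the relative path GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large to AutoTokenizer.from_pretrained; when no such local folder is found (for example because the pretrained models were never downloaded, or the script is not run from the repo root), transformers falls back to treating the string as a Hugging Face Hub repo id, and a multi-segment path fails repo-id validation. Because 1-get-text.py exits before writing logs/liuao/2-name2text-0.txt, webui.py then raises the FileNotFoundError. A minimal Python sketch of the failure mode, with an explicit directory check added purely for illustration (it is not part of the project code):

import os
from transformers import AutoTokenizer

# Same relative path that 1-get-text.py receives as bert_pretrained_dir.
bert_pretrained_dir = "GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large"

if not os.path.isdir(bert_pretrained_dir):
    # This is the situation in the tracebacks above: the folder is missing,
    # so transformers treats the string as a Hub repo id and rejects it.
    raise SystemExit(f"Pretrained BERT directory not found: {bert_pretrained_dir}")

# With the folder and its config/tokenizer files in place, this call succeeds.
tokenizer = AutoTokenizer.from_pretrained(bert_pretrained_dir)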

The error occurs when running this open-source project on Linux; system details:

No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04.3 LTS
Release:        22.04
Codename:       jammy

GrootLiu avatar Mar 09 '24 08:03 GrootLiu

conda 4.8.3

GrootLiu avatar Mar 09 '24 08:03 GrootLiu

The key line is: OSError: Incorrect path_or_model_id: 'GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large'. Please check whether the corresponding pretrained model actually exists in that directory.

SapphireLab avatar Mar 09 '24 17:03 SapphireLab
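To act on that suggestion, a quick check from the GPT-SoVITS repo root can confirm whether the checkpoint is really on disk. The file names below are the usual contents of a Hugging Face BERT-style checkpoint and are an assumption, not an authoritative list for this project:

import os

model_dir = "GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large"
expected = ["config.json", "tokenizer.json", "pytorch_model.bin"]  # assumed file names

print("directory exists:", os.path.isdir(model_dir))
for name in expected:
    status = "found" if os.path.isfile(os.path.join(model_dir, name)) else "MISSING"
    print(f"{name}: {status}")

If the directory is empty or missing, the tokenizer load falls through to the Hub-repo-id code path shown in the tracebacks above.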

See this suggestion. Try your luck.

arctan90 avatar Mar 11 '24 10:03 arctan90

The key line is: OSError: Incorrect path_or_model_id: 'GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large'. Please check whether the corresponding pretrained model actually exists in that directory.

The documentation is the problem here: the Linux docs don't mention any of this. I followed the macOS docs and eventually got it working. By the way, why not set this up as a git submodule? Wouldn't it be better if cloning the repo pulled in the sub-repositories as well?

When I have time, I can fill in the Linux and Docker docs based on the pitfalls I ran into.

(●'◡'●)

GrootLiu avatar Mar 11 '24 12:03 GrootLiu
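For anyone landing here and looking for the concrete fix: the missing folder has to be populated with the pretrained BERT checkpoint before the formatting step runs. One way to do that, sketched here as an assumption rather than the project's documented procedure, is to pull the upstream hfl/chinese-roberta-wwm-ext-large checkpoint with huggingface_hub (already installed, per the tracebacks); the pretrained-model bundle referenced in the GPT-SoVITS README is the other obvious source:

from huggingface_hub import snapshot_download

# Downloads the upstream Chinese RoBERTa-wwm-ext-large checkpoint into the
# path that 1-get-text.py expects, relative to the GPT-SoVITS repo root.
snapshot_download(
    repo_id="hfl/chinese-roberta-wwm-ext-large",
    local_dir="GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large",
)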

The key line is: OSError: Incorrect path_or_model_id: 'GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large'. Please check whether the corresponding pretrained model actually exists in that directory.

Thanks for your reply~

GrootLiu avatar Mar 11 '24 12:03 GrootLiu

I'm running into the same problem. How did you end up solving it?

zhangyue2709 avatar Mar 27 '24 09:03 zhangyue2709

Same problem here.

huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large'. Use `repo_type` argument if needed.

kulame avatar Apr 12 '24 08:04 kulame