GPT-SoVITS
Never-ending page loading (inference page startup)
I wanted to test voice cloning directly with the pretrained models, but the page never opens.
The elapsed time shown at the bottom has already exceeded 300 seconds, yet I still haven't gotten the inference page.
I tried visiting http://localhost:9871/, http://localhost:9872/, and http://localhost:9873/; all of them return "This site can't be reached. localhost refused to connect."
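A quick sanity check (a minimal sketch using only Python's standard library; the port list is simply the ones tried above) can confirm whether anything is listening at all, since "refused to connect" usually means the server never bound the port:

```python
import socket

# Probe the candidate WebUI ports on the local machine. connect_ex()
# returns 0 when the TCP connection succeeds, i.e. something is listening.
for port in (9871, 9872, 9873):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        status = "open" if s.connect_ex(("127.0.0.1", port)) == 0 else "closed"
        print(f"port {port}: {status}")
```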
Sorry, let me add the contents of my model folder.
Also, these model files were fetched using the Git method from the official Colab. My folder layout may be wrong; I'd appreciate it if someone knowledgeable could clear this up. Thanks.
Is there any output in the command line?
Strangely enough, there are no error messages at all. I checked RAM, CPU, and disk I/O in the background: no noticeable usage, the readings are the same as when idle. That's what puzzles me; it appears to be "running normally", yet it isn't consuming hardware resources the way a normal run would.
According to this project's file structure, there is no GPT-SoVITS folder under the pretrained_models folder:
GPT-SoVITS
└─ GPT_SoVITS
└─ pretrained_models
├─ .gitignore
├─ chinese-hubert-base
│ ├─ config.json
│ ├─ preprocessor_config.json
│ └─ pytorch_model.bin
├─ chinese-roberta-wwm-ext-large
│ ├─ config.json
│ ├─ pytorch_model.bin
│ └─ tokenizer.json
├─ s1bert25hz-2kh-longer-epoch=68e-step=50232.ckpt
├─ s2D488k.pth
└─ s2G488k.pth
Consider moving the files to match, then try again.
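To double-check the layout, here is a minimal sketch (run from the repository root; the file names are taken from the tree above) that verifies each expected pretrained file exists:

```python
from pathlib import Path

# Expected pretrained-model files, relative to the GPT-SoVITS repo root,
# following the directory tree shown above.
expected = [
    "GPT_SoVITS/pretrained_models/chinese-hubert-base/pytorch_model.bin",
    "GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large/pytorch_model.bin",
    "GPT_SoVITS/pretrained_models/s1bert25hz-2kh-longer-epoch=68e-step=50232.ckpt",
    "GPT_SoVITS/pretrained_models/s2D488k.pth",
    "GPT_SoVITS/pretrained_models/s2G488k.pth",
]
for rel in expected:
    print(("OK      " if Path(rel).is_file() else "MISSING ") + rel)
```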
With a similar file layout I was unable to reproduce your problem, because an error is raised:
huggingface_hub.utils._validators.HFValidationError:
Repo id must be in the form 'repo_name' or 'namespace/repo_name':
'GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large'.
Use `repo_type` argument if needed.
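For context, this error appears because when the given string is not an existing local directory, transformers falls back to treating it as a Hugging Face Hub repo id, and a repo id may contain at most one "/". A minimal sketch (assuming transformers and huggingface_hub are installed, and that the path does not exist on disk):

```python
from huggingface_hub.utils import HFValidationError
from transformers import AutoModel

# With no matching local directory, the multi-level relative path is
# interpreted as a Hub repo id and fails repo-id validation.
try:
    AutoModel.from_pretrained("GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large")
except HFValidationError as err:
    print(err)
```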
I tried flattening the folders as you suggested, but it neither raised the expected error nor opened the inference page normally.
Then can it be run directly from the command line? For example, open a cmd window in the project's root directory and run:
python GPT_SoVITS/inference_webui.py
Normally it should print the following output:
UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.__get__(instance, owner)()
Some weights of the model checkpoint at GPT_SoVITS/pretrained_models/chinese-hubert-base were not used when initializing HubertModel: ['encoder.pos_conv_embed.conv.weight_g', 'encoder.pos_conv_embed.conv.weight_v']
- This IS expected if you are initializing HubertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing HubertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of HubertModel were not initialized from the model checkpoint at GPT_SoVITS/pretrained_models/chinese-hubert-base and are newly initialized: ['encoder.pos_conv_embed.conv.parametrizations.weight.original0', 'encoder.pos_conv_embed.conv.parametrizations.weight.original1']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
UserWarning: torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.
warnings.warn("torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.")
<All keys matched successfully>
Number of parameter: 77.49M
Running on local URL: http://0.0.0.0:9872
If that doesn't work, consider using the integrated package directly instead of setting up the environment yourself.
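For reference, the final "Running on local URL: http://0.0.0.0:9872" line above is printed by Gradio. A minimal sketch of how such a server binds its host and port (a hypothetical placeholder app, not the project's actual interface): 0.0.0.0 means the server listens on all interfaces, so the page is opened in the browser via http://localhost:9872.

```python
import gradio as gr

# Hypothetical stand-in for the WebUI: bind all interfaces on port 9872,
# mirroring the "Running on local URL" line above.
with gr.Blocks() as demo:
    gr.Markdown("placeholder UI")

demo.launch(server_name="0.0.0.0", server_port=9872)
```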
It feels like it started this time; at least an error is shown now.
Does this mean I need to change my PyTorch version?
After deleting the files, it was able to start.
Many thanks to everyone for the help; this issue can probably be closed now. Thanks again.