GPT-SoVITS
How to run api_v2.py and api.py on GPU
Hi, how can I run api_v2.py and api.py on GPU?
- I installed the CUDA Toolkit and cuDNN.
- I installed PyTorch with `pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu129`
- I have Python 3.11.13
Every time it gets stuck on device: cpu. How can I make sure it runs on the GPU? Thanks in advance for the answers.
Edit the settings in the tts_infer.yaml file. For example, the file should be at D:\GPT-SoVITS\GPT_SoVITS\configs\tts_infer.yaml. From api_v2.py:
python api_v2.py -a 127.0.0.1 -p 9880 -c GPT_SoVITS/configs/tts_infer.yaml
`Command-line arguments:`
`-a` - binding address, default "127.0.0.1"
`-p` - binding port, default 9880
`-c` - TTS configuration file path, default "GPT_SoVITS/configs/tts_infer.yaml"
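For reference, these flags correspond to an argparse setup roughly like the sketch below (a hedged reconstruction from the defaults listed above; the actual code in api_v2.py may name things differently):

import argparse

# Sketch of api_v2.py's argument handling, inferred from the list above
parser = argparse.ArgumentParser(description="GPT-SoVITS api_v2")
parser.add_argument("-a", "--bind_addr", default="127.0.0.1", help="binding address")
parser.add_argument("-p", "--port", type=int, default=9880, help="binding port")
parser.add_argument("-c", "--tts_config", default="GPT_SoVITS/configs/tts_infer.yaml",
                    help="TTS configuration file path")
args = parser.parse_args()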
Therefore, in your tts_infer.yaml, you may see:
custom:
  bert_base_path: GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large
  cnhuhbert_base_path: GPT_SoVITS/pretrained_models/chinese-hubert-base
  device: cuda
  is_half: true
  t2s_weights_path: GPT_weights_v4/xxx.ckpt
  version: v4
  vits_weights_path: SoVITS_weights_v4/xxx.pth
I assume that in your config file, the device is set to cpu. To enable inference on your GPU, set it to device: cuda.
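To double-check that the file actually says what you think it does, a minimal sketch like the one below can print the configured device. It assumes PyYAML is installed and that you run it from the repo root with the default config path; adjust the path to wherever your tts_infer.yaml lives.

import yaml

# Point this at your actual tts_infer.yaml if it lives elsewhere
config_path = "GPT_SoVITS/configs/tts_infer.yaml"

with open(config_path, "r", encoding="utf-8") as f:
    cfg = yaml.safe_load(f)

# Expect "cuda" here after the edit above; "cpu" means the change didn't stick
print(cfg.get("custom", {}).get("device"))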
Also make sure the installed PyTorch build matches your CUDA version (e.g., the cu129 wheel for CUDA 12.9). To check whether CUDA is available to PyTorch, run the following Python code:
import torch
print(torch.cuda.is_available())  # True means PyTorch can see a CUDA device
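If that prints False, a slightly longer check like the one below (assuming a recent PyTorch build; torch.version.cuda and torch.cuda.get_device_name are standard) can help tell whether you accidentally installed a CPU-only wheel:

import torch

print("CUDA available:", torch.cuda.is_available())
print("PyTorch built for CUDA:", torch.version.cuda)  # None means a CPU-only wheel
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
else:
    print("No CUDA device visible; check your driver and reinstall the cu129 wheel.")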
Oh yes, they definitely need English documentation for api_v2.py.