GPT-SoVITS
Can we have MPS support?
I've run into a problem that seems related to CUDA. I'm a MacBook M1 user, so naturally I don't have a CUDA-capable GPU. Normally I would expect to be able to set the device to CPU as a fallback, and I did see that the code handles this, but it did not work smoothly on my machine. PyTorch now offers the MPS backend for Apple Silicon as an alternative to CUDA, so I was wondering when the developers could add support for it.
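Something like the following is what I have in mind. This is just a rough sketch of the device selection, not the project's actual code; `is_half` and the stand-in model are only illustrative:

```python
import torch
import torch.nn as nn

# Pick the best available backend: CUDA, then Apple's MPS, then CPU.
if torch.cuda.is_available():
    device = "cuda:0"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

# fp16 is fine on GPU backends, but many ops lack fp16 support on CPU,
# so keep fp32 there.
is_half = device != "cpu"

# Stand-in model; in 1-get-text.py this would be the BERT model.
model = nn.Linear(8, 8)
model = model.half().to(device) if is_half else model.to(device)
print(device, next(model.parameters()).dtype)
```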
The following is the error I received when formatting the training set (1-训练集格式化工具, the training-set formatting tool). Maybe I've misunderstood why this error happens; please kindly help me solve it.
"/Users/improvise/miniconda/envs/GPTSoVits/bin/python" GPT_SoVITS/prepare_datasets/1-get-text.py
"/Users/improvise/miniconda/envs/GPTSoVits/bin/python" GPT_SoVITS/prepare_datasets/1-get-text.py
Traceback (most recent call last):
File "/Users/improvise/Desktop/GPT-SoVITS-main/GPT_SoVITS/prepare_datasets/1-get-text.py", line 53, in <module>
bert_model = bert_model.half().to(device)
File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2460, in to
return super().to(*args, **kwargs)
File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1160, in to
Traceback (most recent call last):
File "/Users/improvise/Desktop/GPT-SoVITS-main/GPT_SoVITS/prepare_datasets/1-get-text.py", line 53, in <module>
bert_model = bert_model.half().to(device)
File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2460, in to
return super().to(*args, **kwargs)
File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1160, in to
return self._apply(convert)
File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/torch/nn/modules/module.py", line 810, in _apply
module._apply(fn)
File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/torch/nn/modules/module.py", line 810, in _apply
return self._apply(convert)
File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/torch/nn/modules/module.py", line 810, in _apply
module._apply(fn)
File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/torch/nn/modules/module.py", line 810, in _apply
module._apply(fn)
File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/torch/nn/modules/module.py", line 833, in _apply
module._apply(fn)
File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/torch/nn/modules/module.py", line 810, in _apply
param_applied = fn(param)
File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/sit module._apply(fn)
File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/torch/nn/modules/module.py", line 810, in _apply
e-packages/torch/nn/modules/module.py", line 1158, in convert
module._apply(fn)
File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/torch/nn/modules/module.py", line 833, in _apply
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/torch/cuda/__init__.py", line 289, in _lazy_init
param_applied = fn(param)
File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1158, in convert
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/torch/cuda/__init__.py", line 289, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Traceback (most recent call last):
File "/Users/improvise/Desktop/GPT-SoVITS-main/webui.py", line 529, in open1abc
with open(txt_path, "r",encoding="utf8") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'logs/test01/2-name2text-0.txt'
P.S. The only output under the logs folder is /Users/improvise/Desktop/GPT-SoVITS-main/logs/test01/3-bert, and that folder is empty.
Same issue on an M1 Pro.
Have you guys tried CPU inference? I've already tested CPU inference with So-vits-svc and bert-vits2.
CPU inference works!
I tried manually changing every `'cuda:0'` to `'cuda:0' if torch.cuda.is_available() else 'cpu'`, and it worked.
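Roughly, the change looks like this. It's only a sketch built around the `bert_model` line from the traceback above: "bert-base-chinese" is just a placeholder checkpoint (GPT-SoVITS ships its own Chinese BERT under pretrained_models), and skipping `.half()` on CPU is my own tweak, since fp16 is poorly supported there:

```python
import torch
from transformers import AutoModelForMaskedLM

# Fall back to CPU when no CUDA device is available (e.g. on Apple Silicon).
device = "cuda:0" if torch.cuda.is_available() else "cpu"
is_half = device != "cpu"  # fp16 only really pays off on GPU

# Placeholder checkpoint; substitute the project's own pretrained BERT.
bert_model = AutoModelForMaskedLM.from_pretrained("bert-base-chinese")
bert_model = bert_model.half().to(device) if is_half else bert_model.to(device)
```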
How? I can't start webui.py at all.
Use `python webui.py` to start the WebUI.