
Can we have MPS support?

Open Lorre-Ramon opened this issue 1 year ago • 4 comments

I've run into a problem that seems related to CUDA. I'm a MacBook M1 user, so naturally I don't have a CUDA-capable GPU. Normally I would expect to set the device to CPU as an alternative, which, for the record, I did find in the code, but it did not work smoothly on my machine. PyTorch has introduced MPS for Apple Silicon users as an alternative to CUDA; I was wondering when the developers could add support for it.

The following is the error I received while formatting the training set (1-训练集格式化工具, the "dataset formatting tool"). Maybe I've misunderstood why this error happens; please kindly help me resolve it.

"/Users/improvise/miniconda/envs/GPTSoVits/bin/python" GPT_SoVITS/prepare_datasets/1-get-text.py
"/Users/improvise/miniconda/envs/GPTSoVits/bin/python" GPT_SoVITS/prepare_datasets/1-get-text.py
Traceback (most recent call last):
  File "/Users/improvise/Desktop/GPT-SoVITS-main/GPT_SoVITS/prepare_datasets/1-get-text.py", line 53, in <module>
    bert_model = bert_model.half().to(device)
  File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2460, in to
    return super().to(*args, **kwargs)
  File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1160, in to
Traceback (most recent call last):
  File "/Users/improvise/Desktop/GPT-SoVITS-main/GPT_SoVITS/prepare_datasets/1-get-text.py", line 53, in <module>
    bert_model = bert_model.half().to(device)
  File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2460, in to
    return super().to(*args, **kwargs)
  File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1160, in to
    return self._apply(convert)
  File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/torch/nn/modules/module.py", line 810, in _apply
    module._apply(fn)
  File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/torch/nn/modules/module.py", line 810, in _apply
    return self._apply(convert)
  File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/torch/nn/modules/module.py", line 810, in _apply
    module._apply(fn)
  File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/torch/nn/modules/module.py", line 810, in _apply
    module._apply(fn)
  File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/torch/nn/modules/module.py", line 833, in _apply
    module._apply(fn)
  File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/torch/nn/modules/module.py", line 810, in _apply
    param_applied = fn(param)
  File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/sit    module._apply(fn)
  File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/torch/nn/modules/module.py", line 810, in _apply
e-packages/torch/nn/modules/module.py", line 1158, in convert
    module._apply(fn)
  File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/torch/nn/modules/module.py", line 833, in _apply
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/torch/cuda/__init__.py", line 289, in _lazy_init
    param_applied = fn(param)
  File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1158, in convert
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "/Users/improvise/miniconda/envs/GPTSoVits/lib/python3.9/site-packages/torch/cuda/__init__.py", line 289, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Traceback (most recent call last):
  File "/Users/improvise/Desktop/GPT-SoVITS-main/webui.py", line 529, in open1abc
    with open(txt_path, "r",encoding="utf8") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'logs/test01/2-name2text-0.txt'

P.S. The output in the logs folder looks like this: /Users/improvise/Desktop/GPT-SoVITS-main/logs/test01/3-bert, and the folder is empty.
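For reference, the fallback order this request implies can be sketched as follows. This is a minimal, hypothetical helper (not from the repo); in real code the two flags would come from torch.cuda.is_available() and torch.backends.mps.is_available() (PyTorch >= 1.12):

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Prefer CUDA, then Apple's MPS backend, then plain CPU."""
    if cuda_available:
        return "cuda:0"
    if mps_available:
        return "mps"
    return "cpu"

# In real code the flags would be:
#   cuda_available = torch.cuda.is_available()
#   mps_available  = torch.backends.mps.is_available()  # PyTorch >= 1.12
print(pick_device(False, True))  # an M1 Mac with an MPS-enabled PyTorch build -> "mps"
```

The flags are passed in explicitly here only so the selection logic can be shown without a GPU present.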

Lorre-Ramon avatar Jan 18 '24 14:01 Lorre-Ramon

Same issue on an M1 Pro.

hydra-li avatar Jan 18 '24 15:01 hydra-li

Have you guys tried CPU inference? I've tested so-vits-svc and Bert-VITS2 with CPU inference.

RoversX avatar Jan 18 '24 17:01 RoversX

Have you guys tried CPU inference? I've currently tested So-vits-svc and bert-vits2 for CPU inference.

CPU inference works!

Screenshot 2024-01-18 at 2 09 15 PM

RoversX avatar Jan 18 '24 21:01 RoversX

Have you guys tried CPU inference? I've currently tested So-vits-svc and bert-vits2 for CPU inference.

Screenshot 2024-01-18 at 2 23 51 PM

RoversX avatar Jan 18 '24 21:01 RoversX

I tried manually changing every 'cuda:0' into 'cuda:0' if torch.cuda.is_available() else "cpu", and it worked.
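A sketch of what that change amounts to, using a dummy model to stand in for the Hugging Face BERT model (the class and function names here are illustrative, not from the repo). Note that .half() is worth applying only on CUDA, since float16 is often slow or unsupported for CPU ops:

```python
class DummyModel:
    """Stand-in for the Hugging Face BERT model (illustration only)."""
    def __init__(self):
        self.dtype = "float32"
        self.device = None

    def half(self):
        self.dtype = "float16"
        return self

    def to(self, device):
        self.device = device
        return self


def move_model(model, cuda_available: bool):
    # The guarded expression described above: fall back to CPU when no
    # CUDA device exists, and skip the .half() conversion off-GPU.
    device = "cuda:0" if cuda_available else "cpu"
    if device.startswith("cuda"):
        model = model.half()
    return model.to(device)


m = move_model(DummyModel(), cuda_available=False)
print(m.device, m.dtype)  # prints: cpu float32
```

On a real install, cuda_available would be torch.cuda.is_available(), and the model would be the transformers AutoModelForMaskedLM instance that 1-get-text.py loads.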

Lorre-Ramon avatar Jan 19 '24 01:01 Lorre-Ramon

Have you guys tried CPU inference? I've currently tested So-vits-svc and bert-vits2 for CPU inference.

CPU inference works!

Screenshot 2024-01-18 at 2 09 15 PM

How? I can't start webui.py at all.

zhouhao27 avatar Jan 27 '24 02:01 zhouhao27

Have you guys tried CPU inference? I've currently tested So-vits-svc and bert-vits2 for CPU inference.

CPU inference works! Screenshot 2024-01-18 at 2 09 15 PM

How? I can't start webui.py at all.

Use python webui.py to start the WebUI.

RoversX avatar Jan 27 '24 19:01 RoversX