
Failed to load coqui_tts on macOS

Open jcluff-prgx opened this issue 2 years ago • 6 comments

Describe the bug

Unable to load the coqui_tts extension after installing its requirements.

Is there an existing issue for this?

  • [X] I have searched the existing issues

Reproduction

  1. Install the coqui_tts extension on macOS
  2. Enable the extension in text-generation-webui

Screenshot

No response

Logs

2023-12-01 20:28:57 ERROR:Failed to load the extension "coqui_tts".
Traceback (most recent call last):
  File "/Users/me/llm/text-generation-webui/modules/extensions.py", line 41, in load_extensions
    extension.setup()
  File "/Users/me/llm/text-generation-webui/extensions/coqui_tts/script.py", line 180, in setup
    model = load_model()
            ^^^^^^^^^^^^
  File "/Users/me/llm/text-generation-webui/extensions/coqui_tts/script.py", line 76, in load_model
    model = TTS(params["model_name"]).to(params["device"])
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/me/llm/text-generation-webui/installer_files/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1160, in to
    return self._apply(convert)
           ^^^^^^^^^^^^^^^^^^^^
  File "/Users/me/llm/text-generation-webui/installer_files/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 810, in _apply
    module._apply(fn)
  File "/Users/me/llm/text-generation-webui/installer_files/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 810, in _apply
    module._apply(fn)
  File "/Users/me/llm/text-generation-webui/installer_files/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 810, in _apply
    module._apply(fn)
  [Previous line repeated 2 more times]
  File "/Users/me/llm/text-generation-webui/installer_files/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 833, in _apply
    param_applied = fn(param)
                    ^^^^^^^^^
  File "/Users/me/llm/text-generation-webui/installer_files/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1158, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/me/llm/text-generation-webui/installer_files/env/lib/python3.11/site-packages/torch/cuda/__init__.py", line 289, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

System Info

macOS 12.6
Macbook Pro 16, 2021
Apple M1 Pro
16 GB RAM

jcluff-prgx avatar Dec 02 '23 02:12 jcluff-prgx

Same here with a MacBook Pro 14 from 2021 with an M1 Pro and 16 GB of RAM.

I just installed oobabooga for the first time with the one-click solution (the start_macos.sh script) plus everything for coqui_tts, but as soon as it is supposed to load the TTS model, the exception reported above is thrown.

mebe1012 avatar Dec 02 '23 21:12 mebe1012

Same here with the same error as above.

spoonyv avatar Dec 12 '23 16:12 spoonyv

Same here on an Apple silicon device. Cannot get coqui_tts to work. I tried a manual install as well, but that doesn't work either.

Log:

john83@mac ooba-webui % sh start_macos.sh
15:33:32-153670 INFO Starting Text generation web UI
15:33:32-155748 INFO Loading the extension "gallery"
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Closing server running on port: 7860
15:33:44-883686 INFO Loading the extension "gallery"
15:33:44-888179 INFO Loading the extension "coqui_tts"
[XTTS] Loading XTTS...

tts_models/multilingual/multi-dataset/xtts_v2 is already downloaded.
Using model: xtts
15:34:11-110527 ERROR Failed to load the extension "coqui_tts".
Traceback (most recent call last):
  File "/Users/john/Downloads/ooba-webui/modules/extensions.py", line 46, in load_extensions
    extension.setup()
  File "/Users/john/Downloads/ooba-webui/extensions/coqui_tts/script.py", line 168, in setup
    model = load_model()
            ^^^^^^^^^^^^
  File "/Users/john/Downloads/ooba-webui/extensions/coqui_tts/script.py", line 64, in load_model
    model = TTS(params["model_name"]).to(params["device"])
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/john/Downloads/ooba-webui/installer_files/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1160, in to
    return self._apply(convert)
           ^^^^^^^^^^^^^^^^^^^^
  File "/Users/john/Downloads/ooba-webui/installer_files/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 810, in _apply
    module._apply(fn)
  File "/Users/john/Downloads/ooba-webui/installer_files/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 810, in _apply
    module._apply(fn)
  File "/Users/john/Downloads/ooba-webui/installer_files/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 810, in _apply
    module._apply(fn)
  [Previous line repeated 2 more times]
  File "/Users/john/Downloads/ooba-webui/installer_files/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 833, in _apply
    param_applied = fn(param)
                    ^^^^^^^^^
  File "/Users/john/Downloads/ooba-webui/installer_files/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1158, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/john/Downloads/ooba-webui/installer_files/env/lib/python3.11/site-packages/torch/cuda/__init__.py", line 289, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Running on local URL: http://127.0.0.1:7860

CyborgArmy83 avatar Dec 22 '23 14:12 CyborgArmy83

I never could get this to work. FWIW alltalk_tts on Mac is working very well.

spoonyv avatar Dec 31 '23 16:12 spoonyv

Change "device": "cuda" to "device": "cpu" in extensions/coqui_tts/script.py if you are on a Mac, because CUDA is for NVIDIA GPUs and is not available on Apple hardware.

params = {
    "activate": True,
    "autoplay": True,
    "show_text": False,
    "remove_trailing_dots": False,
    "voice": "female_01.wav",
    "language": "English",
    "model_name": "tts_models/multilingual/multi-dataset/xtts_v2",
    "device": "cuda"
}

=>

params = {
    "activate": True,
    "autoplay": True,
    "show_text": False,
    "remove_trailing_dots": False,
    "voice": "female_01.wav",
    "language": "English",
    "model_name": "tts_models/multilingual/multi-dataset/xtts_v2",
    "device": "cpu"
}
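As a variant of hard-coding "cpu", the device string could be picked at runtime so the same script.py works on NVIDIA, Apple silicon, and CPU-only machines. This is only a sketch: the `pick_device` helper is hypothetical and not part of the extension, and note that Coqui TTS may not fully support the MPS backend, so "cpu" remains the safe choice confirmed working in this thread.

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Choose a torch device string: prefer CUDA, then Apple's MPS, else CPU."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"


# Hypothetical wiring inside script.py (not the extension's actual code):
#   import torch
#   params["device"] = pick_device(torch.cuda.is_available(),
#                                  torch.backends.mps.is_available())
```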

rohitsainier avatar Jan 18 '24 12:01 rohitsainier

Working now, thank you.

spoonyv avatar Feb 02 '24 04:02 spoonyv

This issue has been closed due to inactivity for 2 months. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.

github-actions[bot] avatar Apr 02 '24 23:04 github-actions[bot]