Failed to load coqui_tts on macOS
Describe the bug
Unable to load the coqui_tts extension after installing the requirements.
Is there an existing issue for this?
- [X] I have searched the existing issues
Reproduction
- Install the coqui_tts extension on macOS
- Enable the extension in text-generation-webui
Screenshot
No response
Logs
```
2023-12-01 20:28:57 ERROR:Failed to load the extension "coqui_tts".
Traceback (most recent call last):
  File "/Users/me/llm/text-generation-webui/modules/extensions.py", line 41, in load_extensions
    extension.setup()
  File "/Users/me/llm/text-generation-webui/extensions/coqui_tts/script.py", line 180, in setup
    model = load_model()
            ^^^^^^^^^^^^
  File "/Users/me/llm/text-generation-webui/extensions/coqui_tts/script.py", line 76, in load_model
    model = TTS(params["model_name"]).to(params["device"])
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/me/llm/text-generation-webui/installer_files/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1160, in to
    return self._apply(convert)
           ^^^^^^^^^^^^^^^^^^^^
  File "/Users/me/llm/text-generation-webui/installer_files/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 810, in _apply
    module._apply(fn)
  File "/Users/me/llm/text-generation-webui/installer_files/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 810, in _apply
    module._apply(fn)
  File "/Users/me/llm/text-generation-webui/installer_files/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 810, in _apply
    module._apply(fn)
  [Previous line repeated 2 more times]
  File "/Users/me/llm/text-generation-webui/installer_files/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 833, in _apply
    param_applied = fn(param)
                    ^^^^^^^^^
  File "/Users/me/llm/text-generation-webui/installer_files/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1158, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/me/llm/text-generation-webui/installer_files/env/lib/python3.11/site-packages/torch/cuda/__init__.py", line 289, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
```
System Info
macOS 12.6
MacBook Pro 16", 2021
Apple M1 Pro
16 GB RAM
Same here with a MacBook Pro 14 from 2021 with an M1 Pro and 16 GB RAM.
I just installed oobabooga for the first time with the one-click solution (the start_macos.sh script) plus everything for coqui_tts, but as soon as the TTS model is supposed to load, the exception reported above is thrown.
Same here with the same error as above.
Same here on an Apple silicon device. Cannot get coqui_tts to work. Tried a manual install as well, but it doesn't work either.
Log:
```
john83@mac ooba-webui % sh start_macos.sh
15:33:32-153670 INFO Starting Text generation web UI
15:33:32-155748 INFO Loading the extension "gallery"
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
Closing server running on port: 7860
15:33:44-883686 INFO Loading the extension "gallery"
15:33:44-888179 INFO Loading the extension "coqui_tts"
[XTTS] Loading XTTS...
tts_models/multilingual/multi-dataset/xtts_v2 is already downloaded.
Using model: xtts
15:34:11-110527 ERROR Failed to load the extension "coqui_tts".
Traceback (most recent call last):
  File "/Users/john/Downloads/ooba-webui/modules/extensions.py", line 46, in load_extensions
    extension.setup()
  File "/Users/john/Downloads/ooba-webui/extensions/coqui_tts/script.py", line 168, in setup
    model = load_model()
            ^^^^^^^^^^^^
  File "/Users/john/Downloads/ooba-webui/extensions/coqui_tts/script.py", line 64, in load_model
    model = TTS(params["model_name"]).to(params["device"])
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/john/Downloads/ooba-webui/installer_files/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1160, in to
    return self._apply(convert)
           ^^^^^^^^^^^^^^^^^^^^
  File "/Users/john/Downloads/ooba-webui/installer_files/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 810, in _apply
    module._apply(fn)
  File "/Users/john/Downloads/ooba-webui/installer_files/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 810, in _apply
    module._apply(fn)
  File "/Users/john/Downloads/ooba-webui/installer_files/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 810, in _apply
    module._apply(fn)
  [Previous line repeated 2 more times]
  File "/Users/john/Downloads/ooba-webui/installer_files/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 833, in _apply
    param_applied = fn(param)
                    ^^^^^^^^^
  File "/Users/john/Downloads/ooba-webui/installer_files/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1158, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/john/Downloads/ooba-webui/installer_files/env/lib/python3.11/site-packages/torch/cuda/__init__.py", line 289, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Running on local URL: http://127.0.0.1:7860
```
I never could get this to work. FWIW alltalk_tts on Mac is working very well.
Change `"device": "cuda"` to `"cpu"` in extensions/coqui_tts/script.py if you are on a Mac, because CUDA is for NVIDIA GPUs and the macOS torch build is not compiled with it.
```python
params = {
    "activate": True,
    "autoplay": True,
    "show_text": False,
    "remove_trailing_dots": False,
    "voice": "female_01.wav",
    "language": "English",
    "model_name": "tts_models/multilingual/multi-dataset/xtts_v2",
    "device": "cuda"
}
```
=>
```python
params = {
    "activate": True,
    "autoplay": True,
    "show_text": False,
    "remove_trailing_dots": False,
    "voice": "female_01.wav",
    "language": "English",
    "model_name": "tts_models/multilingual/multi-dataset/xtts_v2",
    "device": "cpu"
}
```
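A slightly more portable variant is to pick the device from whatever backend torch actually has, so the same script works on NVIDIA, Apple silicon, and CPU-only machines. This is just a sketch, not part of the shipped script.py: the `pick_device` helper and the call site shown in the comment are hypothetical, and since not every op is implemented on MPS, `"cpu"` remains the safe fallback.

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Pick a torch device string, preferring CUDA, then Apple's MPS, then CPU."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"

# In script.py this would be wired up with the real torch checks, e.g.:
#   params["device"] = pick_device(torch.cuda.is_available(),
#                                  torch.backends.mps.is_available())
```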
Working now, thank you!
This issue has been closed due to inactivity for 2 months. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.