Trying to run Chatterbox on an M1 Mac
I'm trying to run Chatterbox on an M1 Mac. I'm getting this error:
gradio.exceptions.Error: "Error: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU."
I believe I'm running the latest versions. Is there a workaround? Not sure where I would change the code or set a flag as suggested by the error message.
Ran into the same issue.
Resolved by inserting a wrapper for torch.load at the beginning of tts.py.
Track down your tts.py file in chatterbox's site-packages folder.
tts-webui/installer_files/env/lib/python3.10/site-packages/chatterbox/tts.py
At the top under imports, I added the following code:
# --- macOS CPU-only fix: force torch.load to map tensors onto the CPU ---
old_load = torch.load

def cpu_load(*args, **kwargs):
    kwargs["map_location"] = torch.device("cpu")  # override any CUDA mapping
    return old_load(*args, **kwargs)

torch.load = cpu_load
# --- End fix ---
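For what it's worth, the wrapper above is just a plain monkeypatch: replace the function with one that forces a keyword argument, then delegate to the original. Here is the same pattern with a toy stand-in loader instead of torch.load (fake_load is invented for illustration, it is not part of any library):

    import io
    import pickle

    # Toy stand-in for torch.load: a loader that honors a map_location keyword.
    def fake_load(buf, map_location=None):
        obj = pickle.load(buf)
        return obj, map_location

    # The monkeypatch: keep a reference to the original, then shadow it with
    # a wrapper that forces map_location before delegating.
    old_load = fake_load

    def cpu_load(*args, **kwargs):
        kwargs["map_location"] = "cpu"  # force CPU regardless of the caller
        return old_load(*args, **kwargs)

    fake_load = cpu_load

    buf = io.BytesIO(pickle.dumps([1, 2, 3]))
    obj, loc = fake_load(buf)
    print(loc)  # prints: cpu

Since the patch sits at the top of tts.py, every later torch.load call in that module picks up the forced map_location without further changes.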
Also, make sure the Generate token backend setting in Advanced Settings isn't set to any of the cudagraphs options.
Thanks but now I get this: Error: Dimension specified as -1 but tensor has no dimensions.
What are the other model settings you use?
The multilingual Chatterbox model seems to have a bug on systems without a CUDA-capable GPU (https://github.com/resemble-ai/chatterbox/issues/351, https://github.com/resemble-ai/chatterbox/issues/357). I ran into it on an amd64 Linux box running the Docker image from ghcr. There is a two-line PR that should fix the problem: https://github.com/resemble-ai/chatterbox/pull/364. I tried it, but then hit a new error, something like "CUDA version mismatch". Maybe it will work better in your case...