
[Bug] Cuda Error: CUBLAS_STATUS_NOT_SUPPORTED

Open AlighieriX opened this issue 5 months ago • 0 comments

Hello. I just installed chatterbox, and when I tried to generate some speech I got the error in the title saying cuBLAS is not supported. From my understanding, chatterbox is usable on both AMD and Nvidia, and since I'm on AMD it should auto-detect that I'm using ZLUDA, so I don't understand why I'm getting this error. Is it something I can fix, or is chatterbox not compatible with AMD/ZLUDA?

Full error message for reference:

```
This share link expires in 1 week. For free permanent hosting and GPU upgrades, run `gradio deploy` from the terminal in the working directory to deploy to Hugging Face Spaces (https://huggingface.co/spaces)
C:\Users\Ezra\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\models\lora.py:393: FutureWarning: LoRACompatibleLinear is deprecated and will be removed in version 1.0.0. Use of LoRACompatibleLinear is deprecated. Please switch to PEFT backend by installing PEFT: pip install peft.
  deprecate("LoRACompatibleLinear", "1.0.0", deprecation_message)
loaded PerthNet (Implicit) at step 250,000
loaded PerthNet (Implicit) at step 250,000
Traceback (most recent call last):
  File "C:\Users\Ezra\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\queueing.py", line 626, in process_events
    response = await route_utils.call_process_api(
  File "C:\Users\Ezra\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\route_utils.py", line 350, in call_process_api
    output = await app.get_blocks().process_api(
  File "C:\Users\Ezra\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\blocks.py", line 2235, in process_api
    result = await self.call_function(
  File "C:\Users\Ezra\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\blocks.py", line 1746, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
  File "C:\Users\Ezra\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "C:\Users\Ezra\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio\_backends\_asyncio.py", line 2470, in run_sync_in_worker_thread
    return await future
  File "C:\Users\Ezra\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio\_backends\_asyncio.py", line 967, in run
    result = context.run(func, *args)
  File "C:\Users\Ezra\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\utils.py", line 917, in wrapper
    response = f(*args, **kwargs)
  File "K:\chatterbox\chatterbox-master\chatterbox-master\gradio_tts_app.py", line 31, in generate
    wav = model.generate(
  File "C:\Users\Ezra\AppData\Local\Programs\Python\Python310\lib\site-packages\chatterbox\tts.py", line 246, in generate
    speech_tokens = self.t3.inference(
  File "C:\Users\Ezra\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Ezra\AppData\Local\Programs\Python\Python310\lib\site-packages\chatterbox\models\t3\t3.py", line 240, in inference
    embeds, len_cond = self.prepare_input_embeds(
  File "C:\Users\Ezra\AppData\Local\Programs\Python\Python310\lib\site-packages\chatterbox\models\t3\t3.py", line 89, in prepare_input_embeds
    cond_emb = self.prepare_conditioning(t3_cond)  # (B, len_cond, dim)
  File "C:\Users\Ezra\AppData\Local\Programs\Python\Python310\lib\site-packages\chatterbox\models\t3\t3.py", line 78, in prepare_conditioning
    return self.cond_enc(t3_cond)  # (B, len_cond, dim)
  File "C:\Users\Ezra\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Ezra\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Ezra\AppData\Local\Programs\Python\Python310\lib\site-packages\chatterbox\models\t3\modules\cond_enc.py", line 70, in forward
    cond_spkr = self.spkr_enc(cond.speaker_emb.view(-1, self.hp.speaker_embed_size))[:, None]  # (B, 1, dim)
  File "C:\Users\Ezra\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Ezra\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Ezra\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\linear.py", line 125, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasLtMatmulAlgoGetHeuristic( ltHandle, computeDesc.descriptor(), Adesc.descriptor(), Bdesc.descriptor(), Cdesc.descriptor(), Cdesc.descriptor(), preference.descriptor(), 1, &heuristicResult, &returnedResult)`
```
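The traceback shows the failure happens in `F.linear`, where PyTorch calls into cuBLASLt's matmul heuristic, a path ZLUDA may not implement. As a diagnostic sketch (not a confirmed fix): PyTorch honors the `DISABLE_ADDMM_CUDA_LT` environment variable, which steers linear layers away from the cuBLASLt path, and chatterbox's documented `ChatterboxTTS.from_pretrained(device=...)` accepts `"cpu"` as a fallback. The `CHATTERBOX_DEVICE` override variable and the `pick_device` helper below are my own conventions, not part of chatterbox:

```python
import os

# Must be set before torch is first imported: tells PyTorch's addmm/linear
# to use the plain cuBLAS path instead of cublasLtMatmul, which is the call
# that raises CUBLAS_STATUS_NOT_SUPPORTED in the traceback above.
os.environ.setdefault("DISABLE_ADDMM_CUDA_LT", "1")

def pick_device() -> str:
    """Choose a device for chatterbox, with a manual escape hatch.

    CHATTERBOX_DEVICE=cpu (our own env var, not chatterbox's) forces the
    CPU path entirely, sidestepping ZLUDA/cuBLAS if the toggle above
    doesn't help. Otherwise fall back to whatever torch reports.
    """
    override = os.environ.get("CHATTERBOX_DEVICE")
    if override:
        return override
    try:
        import torch  # imported lazily, after the env var is in place
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass
    return "cpu"

# Hypothetical usage, following chatterbox's README-style API:
# from chatterbox.tts import ChatterboxTTS
# model = ChatterboxTTS.from_pretrained(device=pick_device())
# wav = model.generate("Testing the CPU fallback.")
```

If the error disappears with `device="cpu"` but persists on `"cuda"` even with the toggle set, that would point at a missing cuBLASLt implementation in ZLUDA rather than at chatterbox itself.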

AlighieriX · Jul 23 '25 20:07