ChatTTS-Forge
[ISSUE] Error on an M-series (Apple silicon) MacBook: RuntimeError: Placeholder storage has not been allocated on MPS device!
Confirmation checklist
- [X] I have read the README.md and dependencies.md files
- [X] I have confirmed that no existing issue or discussion covers this BUG
- [X] I have confirmed that the problem occurs on the latest code or a stable release
- [X] I have confirmed that the problem is not related to the API
- [X] I have confirmed that the problem is not related to the WebUI
- [X] I have confirmed that the problem is not related to Finetune
Your issue
After running webui.py on an M-series MacBook, I clicked the Generate button directly, without changing any settings, and got the following error:
Traceback (most recent call last):
  File "/Users/atfa/miniconda3/envs/ChatTTS-Forge/lib/python3.10/site-packages/gradio/queueing.py", line 388, in call_prediction
    output = await route_utils.call_process_api(
  File "/Users/atfa/miniconda3/envs/ChatTTS-Forge/lib/python3.10/site-packages/gradio/route_utils.py", line 219, in call_process_api
    output = await app.get_blocks().process_api(
  File "/Users/atfa/miniconda3/envs/ChatTTS-Forge/lib/python3.10/site-packages/gradio/blocks.py", line 1437, in process_api
    result = await self.call_function(
  File "/Users/atfa/miniconda3/envs/ChatTTS-Forge/lib/python3.10/site-packages/gradio/blocks.py", line 1109, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/Users/atfa/miniconda3/envs/ChatTTS-Forge/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/Users/atfa/miniconda3/envs/ChatTTS-Forge/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
    return await future
  File "/Users/atfa/miniconda3/envs/ChatTTS-Forge/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 859, in run
    result = context.run(func, *args)
  File "/Users/atfa/miniconda3/envs/ChatTTS-Forge/lib/python3.10/site-packages/gradio/utils.py", line 641, in wrapper
    response = f(*args, **kwargs)
  File "/Users/atfa/ai/ChatTTS-Forge/modules/webui/tts_tab.py", line 21, in tts_generate_with_history
    audio = tts_generate(*args, **kwargs)
  File "/Users/atfa/ai/ChatTTS-Forge/modules/webui/webui_utils.py", line 297, in tts_generate
    sample_rate, audio_data = handler.enqueue()
  File "/Users/atfa/ai/ChatTTS-Forge/modules/core/handler/TTSHandler.py", line 81, in enqueue
    return self.pipeline.generate()
  File "/Users/atfa/ai/ChatTTS-Forge/modules/core/pipeline/pipeline.py", line 45, in generate
    synth.start_generate()
  File "/Users/atfa/ai/ChatTTS-Forge/modules/core/pipeline/generate/BatchSynth.py", line 47, in start_generate
    self.start_generate_sync()
  File "/Users/atfa/ai/ChatTTS-Forge/modules/core/pipeline/generate/BatchSynth.py", line 60, in start_generate_sync
    self.generator.generate()
  File "/Users/atfa/ai/ChatTTS-Forge/modules/core/pipeline/generate/BatchGenerate.py", line 47, in generate
    self.generate_batch(batch)
  File "/Users/atfa/ai/ChatTTS-Forge/modules/core/pipeline/generate/BatchGenerate.py", line 59, in generate_batch
    results = model.generate_batch(segments=segments, context=self.context)
  File "/Users/atfa/ai/ChatTTS-Forge/modules/core/models/tts/ChatTtsModel.py", line 56, in generate_batch
    return self.generate_batch_base(segments, context, stream=False)
  File "/Users/atfa/ai/ChatTTS-Forge/modules/core/models/tts/ChatTtsModel.py", line 130, in generate_batch_base
    results = infer.generate_audio(
  File "/Users/atfa/ai/ChatTTS-Forge/modules/core/models/zoo/ChatTTSInfer.py", line 343, in generate_audio
    data = self._generate_audio(
  File "/Users/atfa/ai/ChatTTS-Forge/modules/core/models/zoo/ChatTTSInfer.py", line 318, in _generate_audio
    return self.infer(
  File "/Users/atfa/ai/ChatTTS-Forge/modules/core/models/zoo/ChatTTSInfer.py", line 104, in infer
    return next(res_gen)
  File "/Users/atfa/miniconda3/envs/ChatTTS-Forge/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 36, in generator_context
    response = gen.send(None)
  File "/Users/atfa/ai/ChatTTS-Forge/modules/core/models/zoo/ChatTTSInfer.py", line 152, in _infer
    for result in self.instance._infer_code(
  File "/Users/atfa/ai/ChatTTS-Forge/modules/repos_static/ChatTTS/ChatTTS/core.py", line 536, in _infer_code
    emb = gpt(input_ids, text_mask)
  File "/Users/atfa/ai/ChatTTS-Forge/modules/repos_static/ChatTTS/ChatTTS/model/gpt.py", line 157, in __call__
    return super().__call__(input_ids, text_mask)
  File "/Users/atfa/miniconda3/envs/ChatTTS-Forge/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/atfa/miniconda3/envs/ChatTTS-Forge/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/atfa/ai/ChatTTS-Forge/modules/repos_static/ChatTTS/ChatTTS/model/gpt.py", line 164, in forward
    emb_text: torch.Tensor = self.emb_text(
  File "/Users/atfa/miniconda3/envs/ChatTTS-Forge/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/atfa/miniconda3/envs/ChatTTS-Forge/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/atfa/miniconda3/envs/ChatTTS-Forge/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 164, in forward
    return F.embedding(
  File "/Users/atfa/miniconda3/envs/ChatTTS-Forge/lib/python3.10/site-packages/torch/nn/functional.py", line 2267, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Placeholder storage has not been allocated on MPS device!
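For context on the failure: this MPS error is typically raised when the index tensor handed to torch.embedding is still on the CPU while the embedding weights have been moved to the MPS device (or vice versa). Below is a minimal, hedged sketch that reproduces the mismatch and the usual fix; the names mirror the `emb_text` frame in the traceback but are purely illustrative, not ChatTTS-Forge's actual code.

```python
import torch
import torch.nn as nn

# Assumption: an Apple-silicon machine where MPS is available; otherwise this runs on CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Embedding weights on the accelerator, mimicking a model loaded with .to("mps").
emb_text = nn.Embedding(num_embeddings=100, embedding_dim=8).to(device)

# Index tensor left on the CPU: when the module is on MPS, this typically raises
# "RuntimeError: Placeholder storage has not been allocated on MPS device!"
input_ids = torch.tensor([[1, 2, 3]])  # created on CPU by default

try:
    emb_text(input_ids)
except RuntimeError as err:
    print("device mismatch:", err)

# Moving the indices to the module's device avoids the error.
emb = emb_text(input_ids.to(device))
print(emb.shape)  # torch.Size([1, 3, 8])
```

If that is what is happening here, a likely fix on the project side would be to ensure input_ids and text_mask are moved to the same device as the GPT module before the embedding call; I have not verified this against the repository code.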
PyTorch version: 2.4.0
Python version: 3.10
macOS version: Darwin KRIS-MacBook-Air-M3.local 23.5.0 Darwin Kernel Version 23.5.0: Wed May 1 20:14:59 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T8122 arm64
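As a quick sanity check of the environment reported above (PyTorch 2.4.0, Python 3.10, M3 MacBook Air), the snippet below prints whether MPS is built and usable and shows the generic PyTorch device-selection pattern; falling back to CPU is only a possible (slower) workaround, not a ChatTTS-Forge-specific option.

```python
import platform

import torch

print("torch:", torch.__version__)      # reported: 2.4.0
print("machine:", platform.machine())   # expected: arm64 on Apple silicon
print("MPS built:", torch.backends.mps.is_built())
print("MPS available:", torch.backends.mps.is_available())

# Generic device selection: tensors created this way land on MPS
# when it is available and on CPU otherwise.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
x = torch.ones(2, 2, device=device)
print("tensor device:", x.device)
```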