
Win 10: "internal server error" pops up when synthesizing speech

Open surjayc opened this issue 11 months ago • 2 comments

Below is the error message. How can I resolve this?

```
ERROR - Exception on /tts [POST]
Traceback (most recent call last):
  File "flask\app.py", line 1473, in wsgi_app
  File "flask\app.py", line 882, in full_dispatch_request
  File "flask\app.py", line 880, in full_dispatch_request
  File "flask\app.py", line 865, in dispatch_request
  File "app.py", line 245, in tts
  File "ChatTTS\core.py", line 206, in infer
  File "ChatTTS\core.py", line 343, in _infer
  File "ChatTTS\core.py", line 566, in _refine_text
  File "ChatTTS\model\gpt.py", line 421, in generate
  File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
  File "torch\nn\utils\parametrize.py", line 366, in get_parametrized
    return get_cached_parametrization(parametrization)
  File "torch\nn\utils\parametrize.py", line 349, in get_cached_parametrization
    tensor = parametrization()
  File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "torch\nn\utils\parametrize.py", line 269, in forward
    x = self[0](*originals)
  File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "torch\nn\utils\parametrizations.py", line 299, in forward
    return torch._weight_norm(weight_v, weight_g, self.dim)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 MiB. GPU 0 has a total capacty of 2.00 GiB of which 31.85 MiB is free. Of the allocated memory 1.05 GiB is allocated by PyTorch, and 105.95 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```

A second attempt, logged at `2024-12-01 16:49:35,801`, fails with the identical traceback; only the memory figures differ (23.85 MiB free, 1.06 GiB allocated by PyTorch, 95.23 MiB reserved but unallocated).
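The OOM message itself suggests one mitigation: setting `max_split_size_mb` via `PYTORCH_CUDA_ALLOC_CONF` to reduce allocator fragmentation. A minimal sketch of how that could be wired in (the value 64 is illustrative, not a recommendation, and the variable must be set before `torch` is first imported, e.g. at the very top of app.py):

```python
import os

# Allocator hint from the OOM message: cap splittable block size at 64 MiB
# to reduce fragmentation. The value is an assumption to tune per GPU, and
# this line must run before the first `import torch` anywhere in the process.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:64"
```

If fragmentation is not the cause, the model may simply not fit: the log shows over 1 GiB already allocated on a 2 GiB card, so falling back to CPU inference is the usual alternative on such hardware.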

surjayc · Dec 01 '24 09:12