GPT-SoVITS

TTS streaming output: asynchronous error under concurrent calls

Open · zifeiyu-tan opened this issue 4 months ago • 1 comment

When using the streaming-output API, an error is raised as soon as several requests call it at the same time. How can this be solved?

ERROR: Exception in ASGI application

  + Exception Group Traceback (most recent call last):
  |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 401, in run_asgi
  |     result = await app(  # type: ignore[func-returns-value]
  |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 70, in __call__
  |     return await self.app(scope, receive, send)
  |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/fastapi/applications.py", line 1054, in __call__
  |     await super().__call__(scope, receive, send)
  |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/starlette/applications.py", line 113, in __call__
  |     await self.middleware_stack(scope, receive, send)
  |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/starlette/middleware/errors.py", line 187, in __call__
  |     raise exc
  |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/starlette/middleware/errors.py", line 165, in __call__
  |     await self.app(scope, receive, _send)
  |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/starlette/middleware/cors.py", line 85, in __call__
  |     await self.app(scope, receive, send)
  |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
  |     await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/starlette/_exception_handler.py", line 62, in wrapped_app
  |     raise exc
  |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/starlette/_exception_handler.py", line 51, in wrapped_app
  |     await app(scope, receive, sender)
  |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/starlette/routing.py", line 715, in __call__
  |     await self.middleware_stack(scope, receive, send)
  |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/starlette/routing.py", line 735, in app
  |     await route.handle(scope, receive, send)
  |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/starlette/routing.py", line 288, in handle
  |     await self.app(scope, receive, send)
  |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/starlette/routing.py", line 76, in app
  |     await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/starlette/_exception_handler.py", line 62, in wrapped_app
  |     raise exc
  |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/starlette/_exception_handler.py", line 51, in wrapped_app
  |     await app(scope, receive, sender)
  |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/starlette/routing.py", line 74, in app
  |     await response(scope, receive, send)
  |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/starlette/responses.py", line 257, in __call__
  |     await wrap(partial(self.listen_for_disconnect, receive))
  |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 736, in __aexit__
  |     raise BaseExceptionGroup(
  | exceptiongroup.ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
  +-+---------------- 1 ----------------
    | Traceback (most recent call last):
    |   File "/home/user/workspace/GPT-SoVITS-Inference/GPT_SoVITS/TTS_infer_pack/TTS.py", line 787, in run
    |     pred_semantic_list, idx_list = self.t2s_model.model.infer_panel(
    |   File "/home/user/workspace/GPT-SoVITS-Inference/GPT_SoVITS/AR/models/t2s_model.py", line 633, in infer_panel_batch_infer_with_flash_attn
    |     if (self.EOS in samples[:, 0]) or
    |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/torch/_tensor.py", line 1112, in __contains__
    |     return (element == self).any().item()  # type: ignore[union-attr]
    | RuntimeError: CUDA error: device-side assert triggered
    | CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
    | For debugging consider passing CUDA_LAUNCH_BLOCKING=1
    | Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
    |
    | During handling of the above exception, another exception occurred:
    |
    | Traceback (most recent call last):
    |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/starlette/responses.py", line 253, in wrap
    |     await func()
    |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/starlette/responses.py", line 242, in stream_response
    |     async for chunk in self.body_iterator:
    |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/starlette/concurrency.py", line 62, in iterate_in_threadpool
    |     yield await anyio.to_thread.run_sync(_next, as_iterator)
    |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/anyio/to_thread.py", line 56, in run_sync
    |     return await get_async_backend().run_sync_in_worker_thread(
    |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 2405, in run_sync_in_worker_thread
    |     return await future
    |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 914, in run
    |     result = context.run(func, *args)
    |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/starlette/concurrency.py", line 51, in _next
    |     return next(iterator)
    |   File "/home/user/workspace/GPT-SoVITS-Inference/Synthesizers/gsv_fast/GSV_Synthesizer.py", line 92, in get_streaming_tts_wav
    |     for chunk in chunks:
    |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 57, in generator_context
    |     response = gen.send(request)
    |   File "/home/user/workspace/GPT-SoVITS-Inference/GPT_SoVITS/TTS_infer_pack/TTS.py", line 887, in run
    |     self.init_t2s_weights(self.configs.t2s_weights_path)
    |   File "/home/user/workspace/GPT-SoVITS-Inference/GPT_SoVITS/TTS_infer_pack/TTS.py", line 296, in init_t2s_weights
    |     dict_s1 = torch.load(weights_path, map_location=self.configs.device)
    |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/torch/serialization.py", line 1097, in load
    |     return _load(
    |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/torch/serialization.py", line 1525, in _load
    |     result = unpickler.load()
    |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/torch/serialization.py", line 1492, in persistent_load
    |     typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
    |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/torch/serialization.py", line 1466, in load_tensor
    |     wrap_storage=restore_location(storage, location),
    |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/torch/serialization.py", line 1389, in restore_location
    |     return default_restore_location(storage, map_location)
    |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/torch/serialization.py", line 414, in default_restore_location
    |     result = fn(storage, location)
    |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/torch/serialization.py", line 392, in _deserialize
    |     return obj.to(device=device)
    |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/torch/storage.py", line 187, in to
    |     return _to(self, device, non_blocking)
    |   File "/home/user/anaconda3/envs/GPTSoVits/lib/python3.9/site-packages/torch/_utils.py", line 90, in _to
    |     untyped_storage.copy_(self, non_blocking)
    | RuntimeError: CUDA error: device-side assert triggered
    | CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
    | For debugging consider passing CUDA_LAUNCH_BLOCKING=1
    | Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
    +------------------------------------

zifeiyu-tan · Sep 29 '24 09:09
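The trace shows two requests driving the same TTS instance at once: the CUDA assert fires inside infer_panel, and the recovery path (init_t2s_weights reloading the checkpoint) then fails on the already-poisoned CUDA context. Until the pipeline itself is made safe for concurrent inference, one common workaround is to serialize requests so only one synthesis runs on the GPU at a time. The sketch below is illustrative only, not this project's API: it assumes a FastAPI wrapper and uses a hypothetical synthesize_stream() placeholder standing in for the real blocking generator (get_streaming_tts_wav in this repo).

```python
# Minimal sketch: serialize concurrent /tts requests so only one request
# drives the shared CUDA model at a time. Names below (synthesize_stream,
# /tts, gpu_lock) are placeholders, not part of GPT-SoVITS.
import asyncio

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()
gpu_lock = asyncio.Semaphore(1)  # one in-flight synthesis per model instance


def synthesize_stream(text: str):
    """Placeholder for the real blocking TTS generator (e.g. get_streaming_tts_wav)."""
    for i in range(3):
        yield f"chunk-{i}-of-{text}".encode()


async def serialized_chunks(text: str):
    # Hold the lock for the whole life of the stream, not just while the
    # endpoint function runs: StreamingResponse keeps pulling chunks after
    # the handler has already returned.
    async with gpu_lock:
        it = synthesize_stream(text)
        while True:
            # Run the blocking next() in a worker thread so the event loop stays free.
            chunk = await asyncio.to_thread(next, it, None)
            if chunk is None:
                break
            yield chunk


@app.get("/tts")
async def tts(text: str):
    return StreamingResponse(serialized_chunks(text), media_type="audio/wav")
```

If the assert still appears with a single request, launching the server with CUDA_LAUNCH_BLOCKING=1 (as the traceback itself suggests) makes the reported stack point at the kernel that actually failed instead of a later, unrelated call.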