
Error reported: Error invoking remote method 'model/addModel': Error: connect ECONNREFUSED 127

Open rennifa opened this issue 8 months ago • 5 comments

When generating the digital human, an error occurs and generation fails.

rennifa avatar Apr 12 '25 11:04 rennifa
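For context on the error itself: connect ECONNREFUSED means the Electron client tried to open a TCP connection to a local service port and nothing was listening, which usually happens when one of the backing Docker containers failed to start (as the TTS logs later in this thread suggest). A minimal sketch for probing whether a local service port is accepting connections; the host and port in the example are placeholders, not values taken from this issue:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    A refused connection (ECONNREFUSED) raises an OSError here,
    i.e. nothing is listening on that port -- the service container
    is down or crashed during startup.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe a service port on localhost (the port number is a
# placeholder; use the port your docker-compose file actually maps).
# port_open("127.0.0.1", 18180)
```

If this returns False for the service port, check `docker ps` and the container logs before debugging the client side.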

这个该怎么解决啊 重新下载还是这样

weipeng12345 avatar Apr 12 '25 12:04 weipeng12345

More logs are needed. Please post the logs of the heygem-tts service. https://github.com/GuijiAI/HeyGem.ai/issues/381

whl88 avatar Apr 14 '25 08:04 whl88

> More logs are needed. Please post the logs of the heygem-tts service. #381

Bro, please help me. Here are the heygem-tts logs:

2025-04-19 14:53:31 
2025-04-19 14:53:31 ==========
2025-04-19 14:53:31 == CUDA ==
2025-04-19 14:53:31 ==========
2025-04-19 14:53:31 
2025-04-19 14:53:31 CUDA Version 12.1.1
2025-04-19 14:53:31 
2025-04-19 14:53:31 Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
2025-04-19 14:53:31 
2025-04-19 14:53:31 This container image and its contents are governed by the NVIDIA Deep Learning Container License.
2025-04-19 14:53:31 By pulling and using the container, you accept the terms and conditions of this license:
2025-04-19 14:53:31 https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
2025-04-19 14:53:31 
2025-04-19 14:53:31 A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.
2025-04-19 14:53:31 
2025-04-19 14:53:41 taskset: bad usage
2025-04-19 14:53:41 Try 'taskset --help' for more information.
2025-04-19 14:53:41 INFO:gjtts_server:加载自定义 姓名多音字 [tools/text_norm/front_end/utils/name_polyphone.json]
2025-04-19 14:53:42 INFO:     Started server process [1]
2025-04-19 14:53:42 INFO:     Waiting for application startup.
2025-04-19 14:53:42 DEBUG:gjtts_server:语言类型 CN_EN
2025-04-19 14:53:42 DEBUG:gjtts_server:加载自定义 单位 [/code/tools/text_norm/front_end/normalize/config/units.json]
2025-04-19 14:53:42 DEBUG:gjtts_server:加载自定义 单位 [/code/tools/text_norm/front_end/normalize/config/units.json]
2025-04-19 14:53:42 DEBUG:gjtts_server:加载自定义 单位 [/code/tools/text_norm/front_end/normalize/config/units.json]
2025-04-19 14:53:52 2025-04-19 06:53:52.881 | INFO     | tools.llama.generate:load_model:682 - Restored model from checkpoint
2025-04-19 14:53:52 2025-04-19 06:53:52.881 | INFO     | tools.llama.generate:load_model:688 - Using DualARTransformer
2025-04-19 14:53:52 Exception in thread Thread-2 (worker):
2025-04-19 14:53:52 Traceback (most recent call last):
2025-04-19 14:53:52   File "/opt/conda/envs/python310/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
2025-04-19 14:53:52     self.run()
2025-04-19 14:53:52   File "/opt/conda/envs/python310/lib/python3.10/threading.py", line 953, in run
2025-04-19 14:53:52     self._target(*self._args, **self._kwargs)
2025-04-19 14:53:52   File "/code/tools/llama/generate.py", line 916, in worker
2025-04-19 14:53:52     model.setup_caches(
2025-04-19 14:53:52   File "/code/fish_speech/models/text2semantic/llama.py", line 575, in setup_caches
2025-04-19 14:53:52     super().setup_caches(max_batch_size, max_seq_len, dtype)
2025-04-19 14:53:52   File "/code/fish_speech/models/text2semantic/llama.py", line 241, in setup_caches
2025-04-19 14:53:52     b.attention.kv_cache = KVCache(
2025-04-19 14:53:52   File "/code/fish_speech/models/text2semantic/llama.py", line 139, in __init__
2025-04-19 14:53:52     self.register_buffer("k_cache", torch.zeros(cache_shape, dtype=dtype))
2025-04-19 14:53:52   File "/opt/conda/envs/python310/lib/python3.10/site-packages/torch/utils/_device.py", line 78, in __torch_function__
2025-04-19 14:53:52     return func(*args, **kwargs)
2025-04-19 14:53:52 RuntimeError: CUDA error: no kernel image is available for execution on the device
2025-04-19 14:53:52 CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
2025-04-19 14:53:52 For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
2025-04-19 14:53:52 Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

jeadyx avatar Apr 19 '25 07:04 jeadyx
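The "no kernel image is available for execution on the device" error in these logs typically means the PyTorch build inside the container was not compiled for this GPU's compute capability — a common symptom on RTX 50-series (Blackwell) cards, since a CUDA 12.1 image usually ships kernels only up to sm_90. Inside the container you can compare `torch.cuda.get_arch_list()` against `torch.cuda.get_device_capability()`; the compatibility rule itself can be sketched as follows (a simplification that ignores PTX forward compatibility):

```python
def build_supports_gpu(arch_list, gpu_capability):
    """Check whether a compiled arch list covers a GPU's compute capability.

    arch_list: e.g. ["sm_70", "sm_80", "sm_90"], the shape returned by
               torch.cuda.get_arch_list() (assumed here, not verified).
    gpu_capability: (major, minor), e.g. (8, 6) for an RTX 3090.

    CUDA binaries (cubins) are compatible within one major architecture:
    an sm_80 kernel runs on an 8.6 GPU, but no sm_9x-or-lower build
    runs on a 12.x (Blackwell) GPU.
    """
    gpu_major, gpu_minor = gpu_capability
    for arch in arch_list:
        if not arch.startswith("sm_"):
            continue  # this sketch skips PTX entries like "compute_90"
        digits = arch[3:]
        major, minor = int(digits[:-1]), int(digits[-1])
        if major == gpu_major and minor <= gpu_minor:
            return True
    return False
```

If the check fails for your card, the fix is an image whose PyTorch was built with your architecture included, not a driver reinstall.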

@jeadyx Are you using a 50-series graphics card? Please run nvidia-smi in the command line and paste the output here. For example:

C:\Users\admin>nvidia-smi
Wed Apr 23 16:30:26 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.94                 Driver Version: 560.94         CUDA Version: 12.6     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                  Driver-Model | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3090      WDDM  |   00000000:01:00.0 Off |                  N/A |
|  0%   33C    P8             10W /  350W |    7263MiB /  24576MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
+-----------------------------------------------------------------------------------------+

whl88 avatar Apr 23 '25 08:04 whl88
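Besides the full table, newer drivers can report the compute capability directly with `nvidia-smi --query-gpu=name,compute_cap --format=csv,noheader` (field availability depends on driver version). A small sketch for parsing that CSV output; the sample string is illustrative, not taken from this issue:

```python
def parse_gpu_caps(csv_text):
    """Parse 'name, compute_cap' CSV lines, as produced by
    nvidia-smi --query-gpu=name,compute_cap --format=csv,noheader,
    into a list of (name, (major, minor)) tuples."""
    gpus = []
    for line in csv_text.strip().splitlines():
        # rsplit on the last comma: GPU names may themselves contain text
        name, cap = (field.strip() for field in line.rsplit(",", 1))
        major, minor = cap.split(".")
        gpus.append((name, (int(major), int(minor))))
    return gpus

# Example (illustrative output, not from this issue):
# parse_gpu_caps("NVIDIA GeForce RTX 3090, 8.6")
# -> [("NVIDIA GeForce RTX 3090", (8, 6))]
```

A capability of 12.x here would indicate a Blackwell (50-series) card, which the CUDA 12.1 container image above cannot serve.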

==========
== CUDA ==
==========

CUDA Version 12.1.1

Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.

taskset: bad usage
Try 'taskset --help' for more information.
INFO:gjtts_server:加载自定义 姓名多音字 [tools/text_norm/front_end/utils/name_polyphone.json]
INFO:     Started server process [1]
INFO:     Waiting for application startup.
DEBUG:gjtts_server:语言类型 CN_EN
DEBUG:gjtts_server:加载自定义 单位 [/code/tools/text_norm/front_end/normalize/config/units.json]
DEBUG:gjtts_server:加载自定义 单位 [/code/tools/text_norm/front_end/normalize/config/units.json]
DEBUG:gjtts_server:加载自定义 单位 [/code/tools/text_norm/front_end/normalize/config/units.json]
2025-05-14 16:55:24.256 | INFO | tools.llama.generate:load_model:682 - Restored model from checkpoint
2025-05-14 16:55:24.257 | INFO | tools.llama.generate:load_model:688 - Using DualARTransformer
2025-05-14 16:55:24.282 | INFO | tools.server.model_manager:load_llama_model:102 - LLAMA model loaded.
2025-05-14 16:55:25.667 | INFO | tools.vqgan.inference:load_model:43 - Loaded model: <All keys matched successfully>
2025-05-14 16:55:25.668 | INFO | tools.server.model_manager:load_decoder_model:110 - Decoder model loaded.

Patrixkw avatar May 14 '25 17:05 Patrixkw