
Error: Error invoking remote method 'model/addModel': Error: connect ECONNREFUSED 127.0.0.1:18180

liyz22 opened this issue 9 months ago • 6 comments (status: Open)

Error: Error invoking remote method 'model/addModel': Error: connect ECONNREFUSED 127.0.0.1:18180

liyz22 · Mar 07 '25 08:03

What's wrong? Thanks.

liyz22 · Mar 07 '25 08:03

Maybe you didn't install Docker, or you didn't install all three of our services. It's also possible that the heygem-tts service didn't start successfully.

whl88 · Mar 07 '25 10:03
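The causes whl88 lists can be checked from the host before digging into Docker itself. A minimal sketch that tests whether a service's TCP port is accepting connections; 18180 is the port from the error message, and any other service names/ports you add are assumptions to be taken from your own docker-compose file:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers ECONNREFUSED, timeouts, unreachable hosts
        return False

if __name__ == "__main__":
    # 18180 is the port the HeyGem app fails to reach in this thread.
    # Add the other published ports from your docker-compose file as needed.
    for name, port in [("heygem-tts (assumed host port)", 18180)]:
        state = "reachable" if port_open("127.0.0.1", port) else "refused/unreachable"
        print(f"{name} on 127.0.0.1:{port}: {state}")
```

If this prints "refused/unreachable", the container either is not running or does not publish that host port, which matches the ECONNREFUSED in the original report.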

I ran into this problem too, and all the services are running, as shown in the screenshots below.

[screenshots]

hz0571 · Mar 09 '25 03:03

As shown in the figure above, the heygem-tts service has not started successfully.

whl88 · Apr 08 '25 06:04

[screenshots] I am a beginner, and I also encountered this issue. Docker is running normally, but this error still occurs, and localhost:18180 cannot be opened in the browser either. Is it because of a port-mapping issue?

Patrixkw · Apr 25 '25 16:04


Full log follows:

2025-04-26 00:32:29 | ==========
2025-04-26 00:32:29 | == CUDA ==
2025-04-26 00:32:29 | ==========
2025-04-26 00:32:29 | CUDA Version 12.1.1
2025-04-26 00:32:29 | Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
2025-04-26 00:32:29 | This container image and its contents are governed by the NVIDIA Deep Learning Container License.
2025-04-26 00:32:29 | By pulling and using the container, you accept the terms and conditions of this license:
2025-04-26 00:32:29 | https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
2025-04-26 00:32:29 | A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.
2025-04-26 00:32:39 | taskset: bad usage
2025-04-26 00:32:39 | Try 'taskset --help' for more information.
2025-04-26 00:32:39 | INFO:gjtts_server:Loaded custom name-polyphone data [tools/text_norm/front_end/utils/name_polyphone.json]
2025-04-26 00:32:40 | INFO: Started server process [1]
2025-04-26 00:32:40 | INFO: Waiting for application startup.
2025-04-26 00:32:40 | DEBUG:gjtts_server:Language type CN_EN
2025-04-26 00:32:40 | DEBUG:gjtts_server:Loaded custom units [/code/tools/text_norm/front_end/normalize/config/units.json] (logged three times)
2025-04-26 00:33:04 | INFO | tools.llama.generate:load_model:682 - Restored model from checkpoint
2025-04-26 00:33:04 | INFO | tools.llama.generate:load_model:688 - Using DualARTransformer
2025-04-26 00:33:04 | INFO | tools.server.model_manager:load_llama_model:102 - LLAMA model loaded.
2025-04-26 00:33:06 | INFO | tools.vqgan.inference:load_model:43 - Loaded model: <All keys matched successfully>
2025-04-26 00:33:06 | INFO | tools.server.model_manager:load_decoder_model:110 - Decoder model loaded.
2025-04-26 00:33:06 | INFO | tools.llama.generate:generate_long:789 - Encoded text: Hello world.
2025-04-26 00:33:06 | INFO | tools.llama.generate:generate_long:807 - Generating sentence 1/1 of sample 1/1
2025-04-26 00:33:06 | Environment variable LOGGER_FILE_NAME does not exist; using custom name: fish
2025-04-26 00:33:06 | Full log paths: /code/log/fish.log /code/log/fish_err.log
2025-04-26 00:33:09 | [tqdm progress-bar output omitted: 0% -> 3% of 1023 steps at ~14 it/s]
2025-04-26 00:33:09 | INFO | tools.llama.generate:generate_long:861 - Generated 31 tokens in 2.71 seconds, 11.45 tokens/sec
2025-04-26 00:33:09 | INFO | tools.llama.generate:generate_long:864 - Bandwidth achieved: 7.30 GB/s
2025-04-26 00:33:09 | INFO | tools.llama.generate:generate_long:869 - GPU Memory used: 1.75 GB
2025-04-26 00:33:09 | INFO | tools.inference_engine.vq_manager:decode_vq_tokens:20 - VQ features: torch.Size([8, 30])
2025-04-26 00:33:09 | INFO | tools.server.model_manager:warm_up:125 - Models warmed up.
2025-04-26 00:33:09 | INFO | main:initialize_app:88 - Startup done, listening server at http://0.0.0.0:8080
2025-04-26 00:33:09 | INFO: Application startup complete.
2025-04-26 00:33:09 | INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)

Patrixkw · Apr 25 '25 16:04
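One detail the log above does show: inside the container, the TTS server is listening on 0.0.0.0:8080, while the app dials 127.0.0.1:18180 on the host. That only works if the container publishes host port 18180 to container port 8080, which would appear in the Ports column of `docker ps` as something like `0.0.0.0:18180->8080/tcp`. A small sketch for reading that column; the 18180->8080 mapping is an assumption inferred from this log, so compare against the `ports:` entry in your own docker-compose file:

```python
import re

def parse_port_mappings(ports_field: str) -> dict:
    """Map host port -> container port from a `docker ps` Ports column,
    e.g. '0.0.0.0:18180->8080/tcp, :::18180->8080/tcp'."""
    return {int(m.group(1)): int(m.group(2))
            for m in re.finditer(r":(\d+)->(\d+)/tcp", ports_field)}

# The mapping this thread's setup would need: host 18180 (what the app
# dials) forwarded to container 8080 (where Uvicorn listens).
mappings = parse_port_mappings("0.0.0.0:18180->8080/tcp")
print(mappings)  # {18180: 8080}
```

If `docker ps` shows no `18180->...` entry for the TTS container, the container started but never published the port, which would produce exactly the ECONNREFUSED in the original report even though the service itself is healthy.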