
Cannot start Sherpa-Onnx-ASR: Using cuda for inference

3bagorion33 opened this issue 7 months ago • 9 comments

1. Checklist

  • [x] I have removed sensitive information from configuration/logs.

  • [x] I have checked the FAQ and existing issues.

  • [x] I am using the latest version of the project.


2. Environment Details

  • How did you install Open-LLM-VTuber:

    • [x] git clone
    • [ ] release zip
    • [ ] exe (Windows)
    • [ ] dmg (macOS)
  • Are you running the backend and frontend on the same device?

  • If you used a GPU, please provide your GPU model and driver version:

  • Browser (if applicable):

3. Describe the bug

I did everything described here: https://docs.llmvtuber.com/en/docs/user-guide/backend/asr

C:\Users\a15\Open-LLM-VTuber [main ≡ +0 ~2 -0 !]> uv add onnxruntime-gpu sherpa-onnx==1.10.39+cuda -f https://k2-fsa.github.io/sherpa/onnx/cuda.html
Resolved 276 packages in 2.40s
Installed 1 package in 106ms
 + sherpa-onnx==1.10.39+cuda
C:\Users\a15\Open-LLM-VTuber [main ≡ +0 ~2 -0 !]> uv run run_server.py
2025-05-23 19:57:05.895 | INFO     | __main__:<module>:86 - Running in standard mode. For detailed debug logs, use: uv run run_server.py --verbose
2025-05-23 19:57:05 | INFO     | __main__:run:57 | Open-LLM-VTuber, version v1.1.3
[WARNING] User config contains the following keys not present in default config: character_config.asr_config.sherpa_onnx_asr.encoder, character_config.asr_config.sherpa_onnx_asr.decoder, character_config.asr_config.sherpa_onnx_asr.joiner, character_config.asr_config.sherpa_onnx_asr.nemo_ctc, character_config.asr_config.sherpa_onnx_asr.whisper_encoder, character_config.asr_config.sherpa_onnx_asr.whisper_decoder
2025-05-23 19:57:05 | INFO     | upgrade:sync_user_config:350 | [DEBUG] User configuration is up-to-date.
2025-05-23 19:57:05 | INFO     | src.open_llm_vtuber.service_context:init_live2d:156 | Initializing Live2D: shizuku-local
2025-05-23 19:57:05 | INFO     | src.open_llm_vtuber.live2d_model:_lookup_model_info:142 | Model Information Loaded.
2025-05-23 19:57:05 | INFO     | src.open_llm_vtuber.service_context:init_asr:166 | Initializing ASR: sherpa_onnx_asr
2025-05-23 19:57:06 | INFO     | src.open_llm_vtuber.asr.sherpa_onnx_asr:__init__:81 | Sherpa-Onnx-ASR: Using cuda for inference
2025-05-23 19:57:11 | ERROR    | __main__:<module>:91 | An error has been caught in function '<module>', process 'MainProcess' (30108), thread 'MainThread' (16364):
Traceback (most recent call last):

> File "C:\Users\a15\Open-LLM-VTuber\run_server.py", line 91, in <module>
    run(console_log_level=console_log_level)
    │                     └ 'INFO'
    └ <function run at 0x00000298DB1581F0>

  File "C:\Users\a15\Open-LLM-VTuber\run_server.py", line 71, in run
    server = WebSocketServer(config=config)
             │                      └ Config(system_config=SystemConfig(conf_version='v1.1.1', host='localhost', port=12393, config_alts_dir='characters', tool_pro...
             └ <class 'src.open_llm_vtuber.server.WebSocketServer'>

  File "C:\Users\a15\Open-LLM-VTuber\src\open_llm_vtuber\server.py", line 45, in __init__
    default_context_cache.load_from_config(config)
    │                     │                └ Config(system_config=SystemConfig(conf_version='v1.1.1', host='localhost', port=12393, config_alts_dir='characters', tool_pro...
    │                     └ <function ServiceContext.load_from_config at 0x00000298DB0D3880>
    └ <src.open_llm_vtuber.service_context.ServiceContext object at 0x00000298DB184100>

  File "C:\Users\a15\Open-LLM-VTuber\src\open_llm_vtuber\service_context.py", line 132, in load_from_config
    self.init_asr(config.character_config.asr_config)
    │    │        │      │                └ ASRConfig(asr_model='sherpa_onnx_asr', azure_asr=AzureASRConfig(api_key='azure_api_key', region='eastus', languages=['en-US',...
    │    │        │      └ CharacterConfig(conf_name='shizuku-local', conf_uid='shizuku-local-001', live2d_model_name='shizuku-local', character_name='S...
    │    │        └ Config(system_config=SystemConfig(conf_version='v1.1.1', host='localhost', port=12393, config_alts_dir='characters', tool_pro...
    │    └ <function ServiceContext.init_asr at 0x00000298DB0D39A0>
    └ <src.open_llm_vtuber.service_context.ServiceContext object at 0x00000298DB184100>

  File "C:\Users\a15\Open-LLM-VTuber\src\open_llm_vtuber\service_context.py", line 167, in init_asr
    self.asr_engine = ASRFactory.get_asr_system(
    │    │            │          └ <staticmethod(<function ASRFactory.get_asr_system at 0x00000298DA11CF70>)>
    │    │            └ <class 'src.open_llm_vtuber.asr.asr_factory.ASRFactory'>
    │    └ None
    └ <src.open_llm_vtuber.service_context.ServiceContext object at 0x00000298DB184100>

  File "C:\Users\a15\Open-LLM-VTuber\src\open_llm_vtuber\asr\asr_factory.py", line 58, in get_asr_system
    return SherpaOnnxASR(**kwargs)
           │               └ {'model_type': 'nemo_ctc', 'encoder': './models/sherpa-onnx-nemo-fast-conformer-transducer-be-de-en-es-fr-hr-it-pl-ru-uk-20k/...
           └ <class 'src.open_llm_vtuber.asr.sherpa_onnx_asr.VoiceRecognition'>

  File "C:\Users\a15\Open-LLM-VTuber\src\open_llm_vtuber\asr\sherpa_onnx_asr.py", line 83, in __init__
    self.recognizer = self._create_recognizer()
    │                 │    └ <function VoiceRecognition._create_recognizer at 0x00000298DB29A170>
    │                 └ <src.open_llm_vtuber.asr.sherpa_onnx_asr.VoiceRecognition object at 0x00000298DB185180>
    └ <src.open_llm_vtuber.asr.sherpa_onnx_asr.VoiceRecognition object at 0x00000298DB185180>

  File "C:\Users\a15\Open-LLM-VTuber\src\open_llm_vtuber\asr\sherpa_onnx_asr.py", line 116, in _create_recognizer
    recognizer = sherpa_onnx.OfflineRecognizer.from_nemo_ctc(
                 │           │                 └ <classmethod(<function OfflineRecognizer.from_nemo_ctc at 0x00000298DB1596C0>)>
                 │           └ <class 'sherpa_onnx.offline_recognizer.OfflineRecognizer'>
                 └ <module 'sherpa_onnx' from 'C:\\Users\\a15\\Open-LLM-VTuber\\.venv\\lib\\site-packages\\sherpa_onnx\\__init__.py'>

  File "C:\Users\a15\Open-LLM-VTuber\.venv\lib\site-packages\sherpa_onnx\offline_recognizer.py", line 480, in from_nemo_ctc
    self.recognizer = _Recognizer(recognizer_config)
    │                 │           └ <_sherpa_onnx.OfflineRecognizerConfig object at 0x00000298DB271230>
    │                 └ <class '_sherpa_onnx.OfflineRecognizer'>
    └ <sherpa_onnx.offline_recognizer.OfflineRecognizer object at 0x00000298DB184190>

RuntimeError: D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1209 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\Users\a15\Open-LLM-VTuber\.venv\lib\site-packages\onnxruntime_providers_cuda.dll"
C:\Users\a15\Open-LLM-VTuber [main ≡ +0 ~2 -0 !]> nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Jun_13_19:42:34_Pacific_Daylight_Time_2023
Cuda compilation tools, release 12.2, V12.2.91
Build cuda_12.2.r12.2/compiler.32965470_0

C:\Users\a15\Open-LLM-VTuber [main ≡ +0 ~2 -0 !]> ls "C:\Users\a15\Open-LLM-VTuber\.venv\lib\site-packages\onnxruntime_providers_cuda.dll"

    Directory: C:\Users\a15\Open-LLM-VTuber\.venv\lib\site-packages

Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
-a---          21.05.2025    19:13      370362400 onnxruntime_providers_cuda.dll
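LoadLibrary error 126 means "the specified module could not be found", and since the ls output above shows that onnxruntime_providers_cuda.dll itself exists, the failure most likely comes from one of that DLL's own dependencies (CUDA runtime, cuBLAS, cuDNN) not being findable on PATH. A minimal sketch for checking this, assuming the dependency names of an onnxruntime-gpu build targeting CUDA 11.8 / cuDNN 8 — treat the DLL names as illustrative examples, since other builds link against different versions:

```python
import os

def find_on_path(filename: str) -> list[str]:
    """Return every directory on PATH that contains the given file."""
    hits = []
    for d in os.environ.get("PATH", "").split(os.pathsep):
        if d and os.path.isfile(os.path.join(d, filename)):
            hits.append(d)
    return hits

# Example dependency names for a CUDA 11.8 / cuDNN 8 build of
# onnxruntime-gpu; adjust to match the CUDA version your wheel expects.
for dll in ("cudart64_11.dll", "cublas64_11.dll", "cudnn64_8.dll"):
    print(dll, "->", find_on_path(dll) or "NOT FOUND on PATH")
```

If any of these report NOT FOUND, adding the matching CUDA/cuDNN bin directories to PATH (or installing the matching versions) is the usual fix. Note that the nvcc output above reports CUDA 12.2, so a version mismatch between the installed toolkit and what the wheel was built against is also worth ruling out.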


Please describe in detail what happened, what you expected to see, and how to reproduce it.


4. Screenshots / Logs (if relevant)

  • Backend log
  • Frontend setting (General)
  • Frontend console log (F12)
  • If using Ollama: output of ollama ps

5. Configuration

Please provide relevant config files, with sensitive info like API keys removed.

  • conf.yaml
  • model_dict.json, .model3.json

3bagorion33 avatar May 23 '25 14:05 3bagorion33

Due to an oversight during a previous PR merge, we forgot to update the English documentation. We apologize for this.

The English documentation has now been updated. Please refresh the page and follow the instructions in the latest English version; if the changes do not appear, try clearing your cache.

ylxmf2005 avatar May 26 '25 05:05 ylxmf2005

Thank you very much for your reply. I have followed the updated instructions, having previously uninstalled the old versions of the packages, but I still get the error.

C:\Users\a15\Open-LLM-VTuber [main ≡ +0 ~2 -0 !]> uv remove sherpa-onnx onnxruntime
error: The dependency `onnxruntime` could not be found in `project.dependencies`
C:\Users\a15\Open-LLM-VTuber [main ≡ +0 ~2 -0 !]> uv remove faster-whisper
error: The dependency `faster-whisper` could not be found in `project.dependencies`
C:\Users\a15\Open-LLM-VTuber [main ≡ +0 ~2 -0 !]> uv remove sherpa-onnx==1.10.39+cuda
Resolved 275 packages in 4.18s
Uninstalled 1 package in 32ms
 - sherpa-onnx==1.10.39+cuda
C:\Users\a15\Open-LLM-VTuber [main ≡ +0 ~2 -0 !]> uv remove onnxruntime-gpu sherpa-onnx==1.10.39+cuda
error: The dependency `sherpa-onnx` could not be found in `project.dependencies`
C:\Users\a15\Open-LLM-VTuber [main ≡ +0 ~2 -0 !]> uv remove onnxruntime-gpu
Resolved 274 packages in 351ms
Uninstalled 1 package in 136ms
 - onnxruntime-gpu==1.22.0
C:\Users\a15\Open-LLM-VTuber [main ≡ +0 ~2 -0 !]> uv add onnxruntime-gpu==1.17.1 sherpa-onnx==1.10.39+cuda -f https://k2-fsa.github.io/sherpa/onnx/cuda.html
Resolved 276 packages in 2.31s
Prepared 1 package in 49.64s
Installed 2 packages in 140ms
 + onnxruntime-gpu==1.17.1
 + sherpa-onnx==1.10.39+cuda
C:\Users\a15\Open-LLM-VTuber [main ≡ +0 ~2 -0 !]> uv run run_server.py
2025-05-26 15:32:22.068 | INFO     | __main__:<module>:86 - Running in standard mode. For detailed debug logs, use: uv run run_server.py --verbose
2025-05-26 15:32:22 | INFO     | __main__:run:57 | Open-LLM-VTuber, version v1.1.3
[WARNING] User config contains the following keys not present in default config: character_config.asr_config.sherpa_onnx_asr.encoder, character_config.asr_config.sherpa_onnx_asr.decoder, character_config.asr_config.sherpa_onnx_asr.joiner, character_config.asr_config.sherpa_onnx_asr.nemo_ctc, character_config.asr_config.sherpa_onnx_asr.whisper_encoder, character_config.asr_config.sherpa_onnx_asr.whisper_decoder
2025-05-26 15:32:22 | INFO     | upgrade:sync_user_config:350 | [DEBUG] User configuration is up-to-date.
2025-05-26 15:32:22 | INFO     | src.open_llm_vtuber.service_context:init_live2d:156 | Initializing Live2D: shizuku-local
2025-05-26 15:32:22 | INFO     | src.open_llm_vtuber.live2d_model:_lookup_model_info:142 | Model Information Loaded.
2025-05-26 15:32:22 | INFO     | src.open_llm_vtuber.service_context:init_asr:166 | Initializing ASR: sherpa_onnx_asr
2025-05-26 15:32:22 | INFO     | src.open_llm_vtuber.asr.sherpa_onnx_asr:__init__:81 | Sherpa-Onnx-ASR: Using cuda for inference
2025-05-26 15:32:27 | ERROR    | __main__:<module>:91 | An error has been caught in function '<module>', process 'MainProcess' (8928), thread 'MainThread' (13400):
Traceback (most recent call last):

> File "C:\Users\a15\Open-LLM-VTuber\run_server.py", line 91, in <module>
    run(console_log_level=console_log_level)
    │                     └ 'INFO'
    └ <function run at 0x00000158DF4581F0>

  File "C:\Users\a15\Open-LLM-VTuber\run_server.py", line 71, in run
    server = WebSocketServer(config=config)
             │                      └ Config(system_config=SystemConfig(conf_version='v1.1.1', host='localhost', port=12393, config_alts_dir='characters', tool_pro...
             └ <class 'src.open_llm_vtuber.server.WebSocketServer'>

  File "C:\Users\a15\Open-LLM-VTuber\src\open_llm_vtuber\server.py", line 45, in __init__
    default_context_cache.load_from_config(config)
    │                     │                └ Config(system_config=SystemConfig(conf_version='v1.1.1', host='localhost', port=12393, config_alts_dir='characters', tool_pro...
    │                     └ <function ServiceContext.load_from_config at 0x00000158DF3D3880>
    └ <src.open_llm_vtuber.service_context.ServiceContext object at 0x00000158DF484100>

  File "C:\Users\a15\Open-LLM-VTuber\src\open_llm_vtuber\service_context.py", line 132, in load_from_config
    self.init_asr(config.character_config.asr_config)
    │    │        │      │                └ ASRConfig(asr_model='sherpa_onnx_asr', azure_asr=AzureASRConfig(api_key='azure_api_key', region='eastus', languages=['en-US',...
    │    │        │      └ CharacterConfig(conf_name='shizuku-local', conf_uid='shizuku-local-001', live2d_model_name='shizuku-local', character_name='S...
    │    │        └ Config(system_config=SystemConfig(conf_version='v1.1.1', host='localhost', port=12393, config_alts_dir='characters', tool_pro...
    │    └ <function ServiceContext.init_asr at 0x00000158DF3D39A0>
    └ <src.open_llm_vtuber.service_context.ServiceContext object at 0x00000158DF484100>

  File "C:\Users\a15\Open-LLM-VTuber\src\open_llm_vtuber\service_context.py", line 167, in init_asr
    self.asr_engine = ASRFactory.get_asr_system(
    │    │            │          └ <staticmethod(<function ASRFactory.get_asr_system at 0x00000158DE41CF70>)>
    │    │            └ <class 'src.open_llm_vtuber.asr.asr_factory.ASRFactory'>
    │    └ None
    └ <src.open_llm_vtuber.service_context.ServiceContext object at 0x00000158DF484100>

  File "C:\Users\a15\Open-LLM-VTuber\src\open_llm_vtuber\asr\asr_factory.py", line 58, in get_asr_system
    return SherpaOnnxASR(**kwargs)
           │               └ {'model_type': 'nemo_ctc', 'encoder': './models/sherpa-onnx-nemo-fast-conformer-transducer-be-de-en-es-fr-hr-it-pl-ru-uk-20k/...
           └ <class 'src.open_llm_vtuber.asr.sherpa_onnx_asr.VoiceRecognition'>

  File "C:\Users\a15\Open-LLM-VTuber\src\open_llm_vtuber\asr\sherpa_onnx_asr.py", line 83, in __init__
    self.recognizer = self._create_recognizer()
    │                 │    └ <function VoiceRecognition._create_recognizer at 0x00000158DF555AB0>
    │                 └ <src.open_llm_vtuber.asr.sherpa_onnx_asr.VoiceRecognition object at 0x00000158DF485180>
    └ <src.open_llm_vtuber.asr.sherpa_onnx_asr.VoiceRecognition object at 0x00000158DF485180>

  File "C:\Users\a15\Open-LLM-VTuber\src\open_llm_vtuber\asr\sherpa_onnx_asr.py", line 116, in _create_recognizer
    recognizer = sherpa_onnx.OfflineRecognizer.from_nemo_ctc(
                 │           │                 └ <classmethod(<function OfflineRecognizer.from_nemo_ctc at 0x00000158DF4597E0>)>
                 │           └ <class 'sherpa_onnx.offline_recognizer.OfflineRecognizer'>
                 └ <module 'sherpa_onnx' from 'C:\\Users\\a15\\Open-LLM-VTuber\\.venv\\lib\\site-packages\\sherpa_onnx\\__init__.py'>

  File "C:\Users\a15\Open-LLM-VTuber\.venv\lib\site-packages\sherpa_onnx\offline_recognizer.py", line 480, in from_nemo_ctc
    self.recognizer = _Recognizer(recognizer_config)
    │                 │           └ <_sherpa_onnx.OfflineRecognizerConfig object at 0x00000158DF5699B0>
    │                 └ <class '_sherpa_onnx.OfflineRecognizer'>
    └ <sherpa_onnx.offline_recognizer.OfflineRecognizer object at 0x00000158DF484190>

RuntimeError: D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1209 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\Users\a15\Open-LLM-VTuber\.venv\lib\site-packages\onnxruntime_providers_cuda.dll"
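Since the same error persists after reinstalling pinned versions, one way to narrow it down is to try loading the provider DLL directly. This is a hedged sketch, not part of the project's code: the path is copied from the traceback, and loading the DLL in isolation can fail for additional reasons (it also depends on onnxruntime's own DLLs), but the resulting error text at least distinguishes "module not found" from other failures:

```python
import ctypes
import sys

def try_load(path: str):
    """Attempt to load a shared library directly; return (ok, error text)."""
    loader = ctypes.WinDLL if sys.platform == "win32" else ctypes.CDLL
    try:
        loader(path)
        return True, ""
    except OSError as e:
        return False, str(e)

# Path copied from the traceback above; on recent Python builds the
# Windows error message also notes when a *dependency* of the DLL,
# rather than the DLL itself, is the missing module.
ok, err = try_load(
    r"C:\Users\a15\Open-LLM-VTuber\.venv\lib"
    r"\site-packages\onnxruntime_providers_cuda.dll"
)
print("loaded" if ok else f"failed: {err}")
```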

The screenshot below shows that there are now no dependency issues.

3bagorion33 avatar May 26 '25 10:05 3bagorion33

That looks quite strange. Could you use https://learn.microsoft.com/en-us/sysinternals/downloads/procmon to investigate what's happening with onnxruntime_providers_cuda.dll?

ylxmf2005 avatar May 26 '25 13:05 ylxmf2005

You can also open a new issue at https://github.com/k2-fsa/sherpa-onnx/issues and mention this one.

ylxmf2005 avatar May 26 '25 13:05 ylxmf2005

Are you using 32-bit windows?

https://github.com/microsoft/onnxruntime/releases/tag/v1.17.1 provides only gpu-enabled lib for 64-bit windows.

csukuangfj avatar May 27 '25 04:05 csukuangfj

Are you using 32-bit windows?

https://github.com/microsoft/onnxruntime/releases/tag/v1.17.1 provides only gpu-enabled lib for 64-bit windows.

He's running the x64 build of the Dependencies tool, so it's not 32-bit Windows.

ylxmf2005 avatar May 27 '25 04:05 ylxmf2005

How do you get your onnxruntime_providers_cuda.dll?

csukuangfj avatar May 27 '25 07:05 csukuangfj

How do you get your onnxruntime_providers_cuda.dll?

Since it is in the .venv directory, I assume it was installed there during dependency installation via the uv command; the instructions I followed are linked above.

3bagorion33 avatar May 27 '25 08:05 3bagorion33

Could you use https://learn.microsoft.com/en-us/sysinternals/downloads/procmon to investigate what's happening with onnxruntime_providers_cuda.dll?

Please clarify exactly what I'm supposed to do there.

3bagorion33 avatar May 27 '25 08:05 3bagorion33