byzer-llm
How do I use the SenseVoice model after deploying it locally?

Deployment command:
byzerllm deploy --pretrained_model_type custom/sensevoice ^
--infer_backend transformers ^
--cpus_per_worker 0.001 ^
--gpus_per_worker 0 ^
--num_workers 1 ^
--model_path D:/Project/PYTHON/william-toolbox/SenseVoiceSmall ^
--infer_params vad_model=fsmn-vad vad_kwargs.max_single_segment_time=30000 ^
--model voice2text
I tested it with the whisper-model example code from the documentation:
import byzerllm
import json
import base64
from byzerllm.types import AudioPath
llm = byzerllm.ByzerLLM.from_default_model("voice2text")
audio_file = "D:/Project/PYTHON/william-toolbox/SenseVoiceSmall/example/zh.mp3"
@byzerllm.prompt(llm=llm)
def audio_to_text(audio_file: AudioPath):
    """
    {{ audio_file }}
    """

v = audio_to_text(AudioPath(value=audio_file))
json.loads(v)
Result:
2025-04-02 13:54:08,183 INFO worker.py:1660 -- Connecting to existing Ray cluster at address: 127.0.0.1:6379...
2025-04-02 13:54:08,205 INFO worker.py:1843 -- Connected to Ray cluster. View the dashboard at http://127.0.0.1:8265
Traceback (most recent call last):
  File "d:\Project\PYTHON\william-toolbox\asr.py", line 16, in <module>
    v = audio_to_text(AudioPath(value=audio_file))
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Python\miniconda3\envs\autocoder\Lib\site-packages\byzerllm\__init__.py", line 721, in __call__
    return self.prompt_runner(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Python\miniconda3\envs\autocoder\Lib\site-packages\byzerllm\__init__.py", line 325, in __call__
    return self.run(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Python\miniconda3\envs\autocoder\Lib\site-packages\byzerllm\__init__.py", line 556, in run
    v = llm.prompt(
        ^^^^^^^^^^^
  File "D:\Python\miniconda3\envs\autocoder\Lib\site-packages\byzerllm\utils\client\byzerllm_client.py", line 1584, in wrapper
    raise Exception(
Exception: audio_to_text should return a pydantic model or string
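Judging from the exception text alone, my guess (an assumption, not verified against the byzerllm source) is that the @byzerllm.prompt decorator inspects the wrapped function's return annotation and rejects functions that don't declare a str or pydantic-model return type. A self-contained sketch of that kind of check, for illustration only — `requires_str_return` is a hypothetical stand-in, not a byzerllm API:

```python
import inspect


def requires_str_return(fn):
    """Hypothetical check mimicking the suspected byzerllm behavior:
    reject prompt functions whose signature lacks a `-> str` annotation."""
    ret = inspect.signature(fn).return_annotation
    if ret is not str:
        raise TypeError(f"{fn.__name__} should return a pydantic model or string")
    return fn


# Without a return annotation the check fails, matching the traceback above:
def no_annotation(audio_file):
    """{{ audio_file }}"""


# With an explicit `-> str` annotation it passes:
def with_annotation(audio_file) -> str:
    """{{ audio_file }}"""
```

If that guess is right, changing the signature to `def audio_to_text(audio_file: AudioPath) -> str:` may be worth trying, but I have not confirmed this.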
What is the correct way to use it?