FunASR
A Fundamental End-to-End Speech Recognition Toolkit and Open Source SOTA Pretrained Models, Supporting Speech Recognition, Voice Activity Detection, Text Post-processing etc.
#### What is your question?
I used the Python method from the FunASR documentation for exporting ONNX models to try to export the pretrained model paraex-en-Streaming to ONNX, but...
When recognizing English, the output contains run-together words, for example: "in the mediacontent real time data from brainwaves facialexpressions captured". How can this be fixed?
# Environment
```bash
OS: Ubuntu 18.04.6 LTS
```
# Problem description
I used the Python method from the FunASR documentation for exporting ONNX models to try to export the pretrained model paraformer-zh-streaming to ONNX, but it keeps failing with an error!
```bash
(funasr_env) lipeng@lipeng:~/share/modules$ vim export_ONNX_1.py
(funasr_env) lipeng@lipeng:~/share/modules$ cat export_ONNX_1.py
from funasr import AutoModel

model = AutoModel(model="paraformer-zh-streaming", device="cpu")
res = model.export(quantize=False)
(funasr_env) lipeng@lipeng:~/share/modules$...
```
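Once the export succeeds, one way to sanity-check the resulting file is to open it directly with onnxruntime. This is a minimal sketch only; the output path below is an assumption (the export typically writes model.onnx into the model's local directory, so adjust it to wherever the file actually lands on your machine).

```python
# Sketch: inspect an exported model.onnx with onnxruntime.
# The path below is an assumption; point it at the directory the export
# actually wrote to on your machine.
from pathlib import Path

import onnxruntime as ort

onnx_path = Path.home() / ".cache/modelscope/hub/iic/paraformer-zh-streaming/model.onnx"  # assumed location

sess = ort.InferenceSession(str(onnx_path), providers=["CPUExecutionProvider"])
print([(i.name, i.shape) for i in sess.get_inputs()])   # expected input tensors
print([(o.name, o.shape) for o in sess.get_outputs()])  # produced output tensors
```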
## 🐛 Bug
I pulled the official image registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-gpu-0.1.1 with docker:
```bash
sudo docker pull \
  registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-gpu-0.1.1
mkdir -p ./funasr-runtime-resources/models
sudo docker...
```
## ❓ Questions and Help
Does SenseVoice support real-time (streaming) ASR?
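SenseVoice is normally run as an offline model through AutoModel; a common near-real-time workaround is to front it with VAD so that long audio is split into short utterances. The sketch below follows the usage shown in the FunASR README, but the model identifier and keyword arguments are assumptions and should be checked against your installed version.

```python
# Sketch: offline SenseVoice inference with a VAD front end via AutoModel.
# Model ids and kwargs are assumptions based on the FunASR README; verify
# them against the FunASR version you have installed.
from funasr import AutoModel

model = AutoModel(
    model="iic/SenseVoiceSmall",                    # assumed SenseVoice model id
    vad_model="fsmn-vad",                           # VAD splits long audio into utterances
    vad_kwargs={"max_single_segment_time": 30000},
    device="cpu",
)

res = model.generate(
    input="example.wav",   # hypothetical input file
    language="auto",
    use_itn=True,
)
print(res)
```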
## ❓ Questions and Help
How does FunASR export ONNX for pretrained models on Ubuntu 18.04.6 LTS?...
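For reference, the export issue above already shows the documented pattern: load a registered model with AutoModel and call export(). The sketch below repeats that pattern with a different model name (an example only) and with quantization enabled, which is an assumption about the installed FunASR version.

```python
# Sketch: export a registered pretrained model to ONNX via AutoModel.export(),
# mirroring the snippet in the export issue above. The model name is only an
# example, and quantize=True (to also emit a quantized ONNX) is an assumption
# about your FunASR version.
from funasr import AutoModel

model = AutoModel(model="paraformer-zh", device="cpu")  # example registered model
res = model.export(quantize=True)
print(res)  # the return value / output location depends on the FunASR version
```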
```python
from funasr_onnx import Fsmn_vad
from pathlib import Path

model_dir = "damo/speech_fsmn_vad_zh-cn-16k-common-pytorch"
wav_path = '{}/.cache/modelscope/hub/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/example/vad_example.wav'.format(Path.home())
model = Fsmn_vad(model_dir)
result = model(wav_path)
print(result)
```
and
```python
from funasr_onnx import Paraformer
from pathlib import Path

model_dir ...
```
```python
from pathlib import Path
from funasr import AutoModel

model = AutoModel(
    model=r"iic/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-online-onnx"
)
res = model.generate(
    input="{}/.cache/modelscope/hub/damo/speech_fsmn_vad_zh-pytorch/example/vad_example.wav".format(Path.home()),
)
print(res)
```
Why does running this raise `AssertionError: iic/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-online-onnx is not registered`? If I switch to a model that is not an `.onnx` export, there is no error.
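One reading of that assertion, consistent with the observation that non-ONNX models work, is that AutoModel only resolves registered PyTorch models, while ONNX exports are loaded through the funasr_onnx runtime shown a couple of issues above. The sketch below shows both alternatives; the funasr_onnx path is a hypothetical local directory, and whether the streaming ONNX export needs a different runtime class than Paraformer is an assumption.

```python
# Sketch of two alternatives, assuming the assertion means AutoModel only
# resolves registered (PyTorch) models and not ONNX exports.
from funasr import AutoModel
from funasr_onnx import Paraformer

# 1) Registered streaming model via AutoModel -- the case the reporter says
#    runs without the assertion error.
pt_model = AutoModel(model="paraformer-zh-streaming", device="cpu")

# 2) ONNX export via the funasr_onnx runtime instead of AutoModel. Paraformer
#    is the offline class from the earlier snippet; the streaming export may
#    need a different runtime class (assumption). The path is hypothetical.
onnx_model = Paraformer("/path/to/exported/paraformer/onnx")
```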