FunASR
forward() is missing value for argument 'bias_embed'
Launch command:
nohup bash run_server.sh \
  --vad-dir /workspace/models/speech_fsmn_vad_zh-cn-16k-common-onnx \
  --model-dir /workspace/models/speech_paraformer-large-contextual_asr_nat-zh-cn-16k-common-vocab8404 \
  --punc-dir /workspace/models/punc_ct-transformer_cn-en-common-vocab471067-large-onnx \
  --lm-dir /workspace/models/speech_ngram_lm_zh-cn-ai-wesp-fst \
  --itn-dir /workspace/models/fst_itn_zh \
  --certfile 0 \
  --decoder-thread-num 20 \
  --io-thread-num 10 \
  --model-thread-num 16 \
  --hotword /workspace/models/hotwords/hotwords.txt > log.txt 2>&1 &
Description: With --model-dir /workspace/models/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch the server works normally. After switching to the hotword (contextual) model it fails with "forward() is missing value for argument 'bias_embed'". I also tried switching the hotword model to speech_paraformer-large-contextual_asr_nat-zh-cn-16k-common-vocab8404-onnx, but that did not work either.
Environment: GPU: GeForce RTX 4090; host CUDA version: 12.4; CUDA version inside the container: 11.3; torch version inside the container: 1.12.0+cu113
Error output:
I20250731 09:43:24.747292 1675 funasr-wss-server.cpp:437] Set punc-dir : /workspace/models/punc_ct-transformer_cn-en-common-vocab471067-large-onnx
I20250731 09:43:24.747342 1675 funasr-wss-server.cpp:466] hotword path: /workspace/models/hotwords/hotwords.txt
I20250731 09:43:24.747411 1675 util.cpp:1013] hotwords:
I20250731 09:43:24.747473 1675 util.cpp:1042] 魔搭 : 20
I20250731 09:43:24.748739 1675 funasr-wss-server.cpp:497] SSL is closed!
I20250731 09:43:24.772768 1675 fsmn-vad.cpp:58] Successfully load model from /workspace/models/speech_fsmn_vad_zh-cn-16k-common-onnx/model_quant.onnx
I20250731 09:43:24.796345 1675 paraformer-torch.cpp:41] CUDA is available, running on GPU
I20250731 09:43:28.708343 1675 paraformer-torch.cpp:52] Successfully load model from /workspace/models/speech_paraformer-large-contextual_asr_nat-zh-cn-16k-common-vocab8404/model_blade.torchscript
E20250731 09:43:28.715540 1675 paraformer-torch.cpp:518] forward() is missing value for argument 'bias_embed'. Declaration: forward(torch.funasr.models.contextual_paraformer.model.ContextualParaformer self, Tensor speech, Tensor speech_lengths, Tensor bias_embed) -> ((Tensor, Tensor))
Exception raised from checkAndNormalizeInputs at ../aten/src/ATen/core/function_schema_inl.h:369 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x3e (0x7f33f612d1ee in /usr/local/lib/python3.8/dist-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x5c (0x7f33f61085e8 in /usr/local/lib/python3.8/dist-packages/torch/lib/libc10.so)
frame #2:
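
For reference, the TorchScript declaration in the error message shows that the contextual (hotword) model's forward expects three inputs (speech, speech_lengths, bias_embed), whereas a plain Paraformer export takes only two, so a code path that never supplies a hotword bias embedding trips this check. Below is a minimal sketch with a hypothetical toy module (not FunASR code) that reproduces the same class of error:

import torch

class ContextualToy(torch.nn.Module):
    # Stand-in for an exported contextual model whose forward requires a
    # hotword bias embedding in addition to speech and speech_lengths.
    def forward(self, speech: torch.Tensor, speech_lengths: torch.Tensor,
                bias_embed: torch.Tensor):
        return speech, speech_lengths

scripted = torch.jit.script(ContextualToy())
speech = torch.zeros(1, 100, 560)   # dummy (batch, frames, feature_dim) features
lengths = torch.tensor([100])

try:
    # Calling with only two arguments, as a non-contextual code path would,
    # raises: forward() is missing value for argument 'bias_embed'
    scripted(speech, lengths)
except RuntimeError as e:
    print(e)

# Supplying a (dummy) bias embedding satisfies the declaration
scripted(speech, lengths, torch.zeros(1, 1, 512))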