Running bash inference_fireredasr_llm.sh fails with safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge
speech2text.py --asr_type llm --model_dir /root/share/FireRedASR/examples/pretrained_models/FireRedASR-LLM-L --batch_size 1 --beam_size 3 --decode_max_len 0 --decode_min_len 0 --repetition_penalty 3.0 --llm_length_penalty 1.0 --temperature 1.0 --wav_scp wav/wav.scp --output out/llm-l-asr.txt
Namespace(asr_type='llm', model_dir='/root/share/FireRedASR/examples/pretrained_models/FireRedASR-LLM-L', wav_path=None, wav_paths=None, wav_dir=None, wav_scp='wav/wav.scp', output='out/llm-l-asr.txt', use_gpu=1, batch_size=1, beam_size=3, decode_max_len=0, nbest=1, softmax_smoothing=1.0, aed_length_penalty=0.0, eos_penalty=1.0, decode_min_len=0, repetition_penalty=3.0, llm_length_penalty=1.0, temperature=1.0)
#wavs=4
model_path /root/share/FireRedASR/examples/pretrained_models/FireRedASR-LLM-L/model.pth.tar
encoder_path /root/share/FireRedASR/examples/pretrained_models/FireRedASR-LLM-L/asr_encoder.pth.tar
llm_dir /root/share/FireRedASR/examples/pretrained_models/FireRedASR-LLM-L/Qwen2-7B-Instruct
model args: Namespace(input_length_max=30.0, input_length_min=0.1, output_length_max=150, output_length_min=1, freeze_encoder=0, encoder_downsample_rate=2, freeze_llm=0, use_flash_attn=0, use_lora=1, ...)
Sliding Window Attention is enabled but not implemented for `eager`; unexpected results may be encountered.
Loading checkpoint shards: 25%|███████████▎ | 1/4 [00:00<00:01, 2.40it/s]
Traceback (most recent call last):
  File "/root/share/FireRedASR/examples/fireredasr/speech2text.py", line 110, in
    ...
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge
- ref=wav/text
- wer.py --print_sentence_wer 1 --do_tn 0 --rm_special 1 --ref wav/text --hyp out/llm-l-asr.txt
- tail -n8 out/llm-l-asr.txt.wer
Has this been solved?
Not yet.
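A common cause of HeaderTooLarge is that the Qwen2-7B-Instruct shard files under the model dir are not real weights: a safetensors file starts with an 8-byte little-endian header length followed by a JSON header, and if the file is actually a git-lfs pointer stub (or a truncated download), those first 8 bytes decode to an absurdly large number. A minimal sketch to check each shard — check_shard is a hypothetical helper, and the model_dir path is taken from the log above, so adjust it to your checkout:

```python
import json
import struct
from pathlib import Path

def check_shard(path: Path) -> bool:
    """Return True if the file starts with a plausible safetensors header."""
    size = path.stat().st_size
    with path.open("rb") as f:
        head = f.read(8)
        if len(head) < 8:
            return False
        # First 8 bytes: little-endian u64 giving the JSON header size
        (header_len,) = struct.unpack("<Q", head)
        if header_len > size - 8:
            # This is what triggers HeaderTooLarge: typical for git-lfs
            # pointer stubs or truncated downloads
            return False
        try:
            json.loads(f.read(header_len))
        except ValueError:
            return False
    return True

# Path from the log above; adjust to your environment
model_dir = Path("/root/share/FireRedASR/examples/pretrained_models/"
                 "FireRedASR-LLM-L/Qwen2-7B-Instruct")
if model_dir.is_dir():
    for shard in sorted(model_dir.glob("*.safetensors")):
        verdict = "OK" if check_shard(shard) else "CORRUPT (re-download, e.g. git lfs pull)"
        print(f"{shard.name}: {verdict}")
```

If a shard reports CORRUPT and its file size is only a few hundred bytes, it is almost certainly an un-pulled git-lfs pointer; re-fetching the weights should fix the loading error.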