When recognizing, if a sentence is completely silent, blank skip reduces all of its frames, and the following exception occurs:
```
Traceback (most recent call last):
  File "/data/k2/icefall/egs/xxxx/Rework/./pruned_transducer_stateless7_ctc_bs/ctc_guide_decode_bs.py", line 847, in
    main()
  File "/data/k2/miniconda3/envs/k2-1080ti/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/data/k2/icefall/egs/xxxx/Rework/./pruned_transducer_stateless7_ctc_bs/ctc_guide_decode_bs.py", line 828, in main
    results_dict = decode_dataset(
  File "/data/k2/icefall/egs/xxxx/Rework/./pruned_transducer_stateless7_ctc_bs/ctc_guide_decode_bs.py", line 571, in decode_dataset
    hyps_dict = decode_one_batch(
  File "/data/k2/icefall/egs/xxxx/Rework/./pruned_transducer_stateless7_ctc_bs/ctc_guide_decode_bs.py", line 464, in decode_one_batch
    hyp_tokens = greedy_search_batch(
  File "/data/k2/icefall/egs/xxxx/Rework/pruned_transducer_stateless7_ctc_bs/beam_search.py", line 633, in greedy_search_batch
    packed_encoder_out = torch.nn.utils.rnn.pack_padded_sequence(
  File "/data/k2/miniconda3/envs/k2-1080ti/lib/python3.9/site-packages/torch/nn/utils/rnn.py", line 262, in pack_padded_sequence
    _VF._pack_padded_sequence(input, lengths, batch_first)
RuntimeError: Length of all samples has to be greater than 0, but found an element in 'lengths' that is <= 0
```
Code should be added to fix this (ensure that at least one frame remains after frame reduction), for example along the lines of the sketch below.
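One possible guard is to clamp the post-reduction lengths to a minimum of one frame right before packing. This is only a sketch written against the call shown in the traceback; the tensor names `encoder_out` / `encoder_out_lens` and the wrapper function are assumptions for illustration, not code from this PR.

```python
import torch


def pack_reduced_encoder_out(
    encoder_out: torch.Tensor,       # (N, T, C): frames kept after blank skip
    encoder_out_lens: torch.Tensor,  # (N,): kept-frame count per utterance
):
    # A completely silent utterance can end up with 0 frames after blank
    # skip, which pack_padded_sequence rejects. Clamping every length to
    # at least 1 lets the batch pack; the silent utterance then simply
    # decodes to an (almost) empty hypothesis instead of crashing.
    encoder_out_lens = torch.clamp(encoder_out_lens, min=1)
    return torch.nn.utils.rnn.pack_padded_sequence(
        input=encoder_out,
        lengths=encoder_out_lens.cpu(),
        batch_first=True,
        enforce_sorted=False,
    )
```

Clamping at the packing call is the least invasive change; alternatively, the frame-reduction (blank-skip) module itself could guarantee that it never drops every frame of an utterance.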
@yfyeung Could you help review this PR?
@drawfish Thanks for your suggestion.
This model is trained on LibriSpeech, whose test set does not contain entirely silent sentences.
IMO, you should modify the code of the exported model.
@yfyeung
Shall we merge this? Do you have any other comments?