
ASR inference

Open YKoustubhRao opened this issue 1 year ago • 0 comments

The following error occurs when running:

m4t_predict /workspace/english/data/lib_light/100_sea_fairies_0812_librivox_64kb_mp3_01_baum_sea_fairies_64kb_0.wav --task asr --tgt_lang "eng" --model_name seamlessM4T_v2_large

warnings.warn("torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.")
2024-01-05 16:53:03,472 INFO -- seamless_communication.cli.m4t.predict.predict: text_generation_opts=SequenceGeneratorOptions(beam_size=5, soft_max_seq_len=(1, 200), hard_max_seq_len=1024, step_processor=None, unk_penalty=0.0, len_penalty=1.0)
2024-01-05 16:53:03,474 INFO -- seamless_communication.cli.m4t.predict.predict: unit_generation_opts=SequenceGeneratorOptions(beam_size=5, soft_max_seq_len=(25, 50), hard_max_seq_len=1024, step_processor=None, unk_penalty=0.0, len_penalty=1.0)
2024-01-05 16:53:03,474 INFO -- seamless_communication.cli.m4t.predict.predict: unit_generation_ngram_filtering=False
2024-01-05 16:53:03,479 WARNING -- seamless_communication.inference.translator: Transposing audio tensor from (bsz, seq_len) -> (seq_len, bsz).
Traceback (most recent call last):
  File "/opt/conda/bin/m4t_predict", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.10/site-packages/seamless_communication/cli/m4t/predict/predict.py", line 226, in main
    text_output, speech_output = translator.predict(
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/seamless_communication/inference/translator.py", line 319, in predict
    texts, units = self.get_prediction(
  File "/opt/conda/lib/python3.10/site-packages/seamless_communication/inference/translator.py", line 188, in get_prediction
    return generator(
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/seamless_communication/inference/generator.py", line 261, in __call__
    texts, text_gen_output = self.s2t_converter.batch_convert(
  File "/opt/conda/lib/python3.10/site-packages/fairseq2/generation/text.py", line 152, in batch_convert
    return self._do_convert(source_seqs, source_padding_mask)
  File "/opt/conda/lib/python3.10/site-packages/fairseq2/generation/text.py", line 99, in _do_convert
    raise RuntimeError(
RuntimeError: The sequence generator returned no hypothesis at index 0. Please file a bug report.
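One thing worth ruling out, given the "Transposing audio tensor" warning just before the crash: SeamlessM4T's audio frontend is documented to expect 16 kHz mono input, and WAV files in other sample rates or channel layouts can produce degraded or empty generator output. A minimal stdlib sketch to check the input file before invoking `m4t_predict` (the `check_wav` helper name is my own, not part of the seamless_communication API):

```python
import wave

def check_wav(path):
    """Return (sample_rate, channels, sample_width_bytes) of a WAV file.

    SeamlessM4T's audio frontend expects 16 kHz mono input; anything else
    is worth resampling before handing the file to m4t_predict.
    """
    with wave.open(path, "rb") as f:
        return f.getframerate(), f.getnchannels(), f.getsampwidth()
```

If `check_wav` reports something other than 16 kHz mono, a standard way to normalize the file is `ffmpeg -i in.wav -ar 16000 -ac 1 out.wav`.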

YKoustubhRao · Jan 05 '24