fairseq
Evaluating a CTC model
❓ Questions and Help
Before asking:
- search the issues.
- search the docs.
What is your question?
How do I evaluate a CTC model after fine-tuning wav2vec 2.0? I am running:
python examples/speech_recognition/infer.py /checkpoint/abaevski/data/speech/libri/10h/wav2vec/raw --task audio_finetuning \
  --nbest 1 --path /path/to/model --gen-subset $subset --results-path /path/to/save/results/for/sclite --w2l-decoder kenlm \
  --lm-model /path/to/kenlm.bin --lm-weight 2 --word-score -1 --sil-weight 0 --criterion ctc --labels ltr --max-tokens 4000000 \
  --post-process letter
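For reference, here is the same command with each argument annotated. The annotations reflect my understanding of the wav2vec 2.0 fine-tuning examples, not official documentation, and the placeholder paths are kept as-is:

```shell
python examples/speech_recognition/infer.py \
  /checkpoint/abaevski/data/speech/libri/10h/wav2vec/raw \  # data directory with the prepared manifests (e.g. $subset.tsv, $subset.ltr, dict.ltr.txt)
  --task audio_finetuning \        # fairseq task used for fine-tuned wav2vec 2.0 models
  --nbest 1 \                      # keep only the single best hypothesis
  --path /path/to/model \          # fine-tuned checkpoint (.pt file)
  --gen-subset $subset \           # which split to decode (e.g. dev-other, test-clean)
  --results-path /path/to/save/results/for/sclite \  # where hypothesis/reference files are written
  --w2l-decoder kenlm \            # beam-search decoder with a KenLM language model
  --lm-model /path/to/kenlm.bin \  # binary KenLM n-gram LM
  --lm-weight 2 \                  # weight of the LM score in beam search
  --word-score -1 \                # score added per emitted word during decoding
  --sil-weight 0 \                 # score for the silence token
  --criterion ctc \                # CTC criterion, matching how the model was fine-tuned
  --labels ltr \                   # targets are letters (character-level labels)
  --max-tokens 4000000 \           # batch size cap, in audio samples
  --post-process letter            # collapse letter-level output (word-boundary "|" tokens) back into words
```

The annotated version is only a reading aid; the command itself is unchanged.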
I would like to ask:
What does /checkpoint/abaevski/data/speech/libri/10h/wav2vec/raw stand for?
What does the path in --lm-model /path/to/kenlm.bin represent?
What does --word-score -1 represent?
What does --sil-weight 0 represent?
What does --criterion ctc stand for?
What does --labels ltr stand for?
What does --post-process letter represent?