kaldi-active-grammar
What kind of datasets was the model trained on?
What datasets was this kaldi-active-grammar model trained on? If any public datasets were included, could you name them? Is the pretrained model you mentioned the Zamia speech model?
I was also curious about this. According to here (cf. stage 2), it should be: Librispeech, TED-LIUM, Mozilla's Common Voice, Tatoeba, and TensorFlow's speech_commands.
Actually, daanzu_multi_en is a partial and unfinished training pipeline. I have ended up working with a heavily modified version of the Zamia pipeline. The datasets are:
- Common Voice
- Common Voice single word
- Librispeech
- LJ Speech
- M-AILabs
- Google Speech Commands
- Tatoeba
- TED-LIUM 3
- Voxforge
- A collection of TTS I generated
How about kaldi_model_daanzu_20211030-biglm? Was it also trained on these datasets?
@zhouyong64
How about kaldi_model_daanzu_20211030-biglm? Was it also trained on these datasets?
Yes, the new model is trained on the same datasets. The major change is that it now includes models necessary for running g2p_en for local pronunciation generation.
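For readers unfamiliar with what "local pronunciation generation" means here: g2p_en is a grapheme-to-phoneme package, i.e. it predicts ARPAbet phoneme sequences for words that are not already in the lexicon. The sketch below is not the g2p_en or kaldi-active-grammar API; it is a minimal, assumption-laden illustration of the lookup-then-fallback pattern such a component implements (the tiny lexicon and `naive_fallback` helper are hypothetical).

```python
# Illustrative sketch only: a lexicon lookup with a g2p-style fallback.
# The real system would call a trained g2p_en model instead of naive_fallback.

# Tiny hand-made lexicon (ARPAbet phonemes); purely illustrative.
LEXICON = {
    "kaldi": ["K", "AA1", "L", "D", "IY0"],
    "grammar": ["G", "R", "AE1", "M", "ER0"],
}

def pronounce(word, fallback):
    """Return phonemes for a known word, else ask the fallback g2p model."""
    return LEXICON.get(word.lower()) or fallback(word)

def naive_fallback(word):
    # Stand-in for a real seq2seq g2p model: one crude guess per letter.
    letter_map = {"a": "AE0", "c": "K", "t": "T"}
    return [letter_map.get(ch, ch.upper()) for ch in word.lower()]

print(pronounce("Kaldi", naive_fallback))  # lexicon hit
print(pronounce("cat", naive_fallback))    # unknown word: fallback guess
```

Bundling the g2p models with the pretrained acoustic model is what lets pronunciations for new grammar words be generated locally, without shipping an exhaustive dictionary.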