vosk-api
Setup fine-tuning script for the models
As in
https://github.com/daanzu/kaldi-active-grammar/issues/33
https://github.com/gooofy/zamia-speech/issues/106
Can you provide the data/local/dict for this model http://alphacephei.com/vosk/models/vosk-model-small-en-us-0.3.zip? I'll help you write the script that downloads the dict and this model, fine-tunes on the data/train folder, and outputs vosk-model-small-en-us-new.
Could you also provide it for "vosk-model-small-es-0.3"? Thank you very much. I'm trying to fine-tune it and after that, I'll document the process.
Better implementation here:
https://github.com/aarora8/kaldi2/blob/opensat_oct2020/egs/OpenSAT2020/s5/local/chain/run_finetune_tl.sh
@nshmyrev I think this is a good and recent example. What's your opinion? https://github.com/kaldi-asr/kaldi/blob/master/egs/libri_css/s5_mono/local/chain/tuning/run_tdnn_1d_ft.sh
@nshmyrev I am trying to adapt a model trained on an Indian English accent to wake-word data. I have set up the dataset in the Kaldi format, but I don't understand how I should change the paths in the script to point to my dataset and model.
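For reference, a Kaldi-format data directory for adaptation data is just a handful of plain-text index files. A minimal sketch follows; the utterance/speaker IDs and the wav path are made-up placeholders, not anything from the released models:

```shell
# Minimal Kaldi data dir for adaptation data (placeholder IDs and paths).
mkdir -p data/finetune
# wav.scp: <utt-id> <path-to-wav>
printf 'spk1-utt1 audio/utt1.wav\n' > data/finetune/wav.scp
# text: <utt-id> <transcript>
printf 'spk1-utt1 hey computer\n' > data/finetune/text
# utt2spk: <utt-id> <spk-id>
printf 'spk1-utt1 spk1\n' > data/finetune/utt2spk
# spk2utt is normally made with utils/utt2spk_to_spk2utt.pl; plain awk works too
awk '{a[$2]=a[$2]" "$1} END{for (s in a) print s a[s]}' \
  data/finetune/utt2spk > data/finetune/spk2utt
```

After this, utils/validate_data_dir.sh (with --no-feats before feature extraction) can check the directory for consistency.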
@nshmyrev could you please provide data/lang, data/local/lang, chain tree-dir for Indian English vosk zip folder ?
More straightforward gist:
https://gist.github.com/daanzu/d29e18abb9e21ccf1cddc8c3e28054ff#file-run_finetune_tdnn_1a_daanzu-sh
@nshmyrev can you please provide the files needed for fine-tuning with daanzu's script for the Indian English accent model?
Another useful link
https://github.com/zhaoyi2/CVTE_chain_model_finetune
Hi :)
Is it planned to complete the documentation on acoustic model fine-tuning (here: https://alphacephei.com/vosk/adaptation)? The procedure currently seems very unclear. For example:
- is fine-tuning possible only for a few models, or with all of them?
- how am I supposed to organize the files in the model to run fine-tuning? The layout seems different from the Kaldi organization, and I'm not sure what I'm doing...
Hi again. Everybody is asking for the input files needed to fine-tune. When will they be released?
P.S. I don't quite understand: help has been requested since August, but the files you surely already have were never uploaded.
Hi, I am also working on fine-tuning the Indian English vosk model. Can anyone please guide me with information on preparing proper documentation, or the steps to follow?
Also, @Ashutosh1995, I read one of your threads on this issue; did you have any success with that? Can you please discuss?
Thanks
So, per #773, you need lattices generated with nnet3/align_lats.sh
align_lats.sh takes feats.scp as input; where could I find that?
Feats are created with make_mfcc.sh from the data folder with wav.scp/segments
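The feature-extraction step mentioned above can be written out as a small reviewable script. This is only a sketch: the data/finetune path, job count, and MFCC config are assumptions for illustration, and the resulting script must be run from a Kaldi egs directory that has steps/ and utils/ available:

```shell
# Write the feature-extraction commands to a script for review
# (paths and options here are illustrative assumptions).
cat > finetune_feats.sh <<'EOF'
#!/bin/sh
set -e
# Run from a Kaldi egs dir; data/finetune must contain
# wav.scp (or wav.scp + segments), text, and utt2spk.
steps/make_mfcc.sh --nj 4 --mfcc-config conf/mfcc_hires.conf \
  data/finetune exp/make_mfcc/finetune mfcc
steps/compute_cmvn_stats.sh data/finetune exp/make_mfcc/finetune mfcc
utils/fix_data_dir.sh data/finetune
EOF
chmod +x finetune_feats.sh
```

The MFCC config must match what the model was trained with (hires features for nnet3/chain models), otherwise alignment will fail later.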
@nshmyrev so I basically need to extract feats from the data on which the model was trained, am I right?
From the adaptation data; you do not need the training data.
Ah, ok, now i get it! Thank you
@nshmyrev by any chance, is there a video tutorial on fine-tuning Kaldi or vosk models? It would be great. Thanks
Hi @nshmyrev, I'm trying to fine-tune the US English model. It requires the vosk-model-en-us-0.22-compile/exp/finetune_ali directory to contain final.mdl, the ali.*.gz files, and the tree file. I have these files for the data with which I'm trying to fine-tune, but the data previously used to train the model is not available to the public from alphacep.
I got these files from a Kaldi model I was trying to train from scratch, from the kaldi/egs/mini_librispeech/s5/exp/mono directory. Can I actually use the files from this directory, or can I use files from other directories such as tri3b, tri2b, etc.? Note: I used the same data both to train from scratch and to fine-tune the US English model.
Is the data used while training the model also required? Also, final.mdl is initially only available in ./exp/chain/tdnn/final.mdl. Can I use the same file for the ./exp/nnet3/tdnn_sp/ and ./exp/finetune_ali/ directories?
@Archan2607 apologies for the late reply but I was temporarily involved in the ASR training and couldn't work out the training part completely.
@vikraman22 please note that we do not have an official fine-tuning tutorial, so this has to be a trial-and-error path.
For trial and error, you'd better ask one question at a time and try to solve the simple questions yourself; there is no need to ask me to do simple things.
Your chances of getting help increase if you submit documentation on fine-tuning and the fine-tuning setup as a pull request to our codebase, just like the existing part on training.
Hi! I am trying to fine-tune the model vosk-model-ru-0.22. I use the "run_finetune_tdnn_1a_daanzu.sh" script for this, and I am missing the ali.*.gz files. How can I generate them?
I tried using the "steps/nnet3/align.sh" script, but got this error:
ERROR (apply-cmvn[5.5.1009~1-e4940]:Value():util/kaldi-table-inl.h:164) Failed to load object from /home/shmyrev/kaldi/egs/ac/vosk-model-ru-0.22-compile/mfcc/raw_mfcc_test_sova_devices.1.ark:41 (to suppress this error, add the permissive (p, ) option to the rspecifier.
How can I generate them?
With steps/nnet3/align.sh
but got error
There must be an earlier error, since the feature files are missing.
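Once features exist, the align.sh call that produces the ali.*.gz files can be sketched as a reviewable script. The directory names below (data/finetune, exp/nnet3/tdnn_sp, exp/finetune_ali) are assumptions for illustration, not the guaranteed layout of any released model:

```shell
# Write the alignment command to a script for review
# (directory names are illustrative assumptions).
cat > finetune_align.sh <<'EOF'
#!/bin/sh
set -e
# Run from a Kaldi egs dir after features (feats.scp, cmvn.scp) exist:
#   data/finetune     - adaptation data dir
#   data/lang         - lang dir matching the model's lexicon
#   exp/nnet3/tdnn_sp - released acoustic model dir (final.mdl, tree)
#   exp/finetune_ali  - output dir that will hold ali.*.gz
steps/nnet3/align.sh --nj 4 \
  data/finetune data/lang exp/nnet3/tdnn_sp exp/finetune_ali
EOF
chmod +x finetune_align.sh
```

Note the alignments are computed on the adaptation data only, per the replies above; the original training data is not needed.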
Hello! I am also trying to run daanzu's fine-tuning script to fine-tune the German model vosk-model-de-0.21, and am looking for the ali.*.gz files. I had a look at steps/nnet3/align.sh, as suggested in the previous response, but if I understand correctly, that script requires a data dir (as in data/train) to run, which is not present in the downloaded model. Could you provide the ali.*.gz files, or indicate which directory to use as the data dir? Thank you very much in advance!
indicate which directory to use as the data-dir
The one with the audio samples you are going to use for fine-tuning.
Thank you for your reply! I read in this comment of the fine-tuning discussion that if the alignment files are generated from the very small amount of fine-tuning data, as opposed to the large amount of training data, they might be of far inferior quality. This dependency seemed to be confirmed in daanzu's reply, who then provided the alignment files for the English model. This is why I thought the initial alignment files were necessary.
they might be of far inferior quality.
No, that is wrong. The alignment is just the timestamps of the phonemes; it doesn't depend on the amount of data.
You do not need original model alignment to finetune.
Ok great, thanks very much for your help!