
Training custom models using Vosk

Open KatPro opened this issue 4 years ago • 29 comments

Hello! Is it possible to train our own custom models like these: https://github.com/alphacep/kaldi-android-demo/releases using Vosk? What steps should we take after the index database is filled with data? Thank you!

KatPro avatar Mar 01 '20 16:03 KatPro

@KatPro models are trained with Kaldi. Follow standard kaldi training scripts, for example, mini_librispeech example.

nshmyrev avatar Mar 01 '20 16:03 nshmyrev

Thank you! And is it possible to train the model for another language following Kaldi training scripts?

KatPro avatar Mar 02 '20 07:03 KatPro

Reopen to increase visibility

nshmyrev avatar Apr 28 '20 09:04 nshmyrev

Documentation about process https://github.com/alphacep/vosk-api/blob/master/doc/models.md#training-your-own-model

nshmyrev avatar May 01 '20 23:05 nshmyrev

Hi @nshmyrev, would it be possible to have a "simple" script that takes an input folder with wav and csv files and does all the work to create a model?

nyroDev avatar May 02 '20 07:05 nyroDev

Would it be possible to have a "simple" script that takes an input folder with wav and csv files and does all the work to create a model?

Sure, it is called the mini_librispeech recipe. It is in kaldi/egs/mini_librispeech/s5/run.sh

nshmyrev avatar May 02 '20 07:05 nshmyrev
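For readers starting from a plain folder of recordings: the recipe expects Kaldi-style data files such as wav.scp, text and utt2spk. Below is a minimal sketch of generating them from a hypothetical metadata.csv with utt_id, speaker and transcript columns; the CSV layout and paths are assumptions for illustration, not part of the recipe.

```python
# Sketch: build Kaldi data-dir files from a folder of wavs plus a CSV.
# Assumes a hypothetical corpus/metadata.csv with columns
# utt_id,speaker,transcript and one <utt_id>.wav per row.
import csv
import os

audio_dir = "corpus/wav"   # assumed input folder
data_dir = "data/train"    # standard Kaldi data directory
os.makedirs(data_dir, exist_ok=True)

with open("corpus/metadata.csv", newline="") as f, \
     open(os.path.join(data_dir, "wav.scp"), "w") as wav_scp, \
     open(os.path.join(data_dir, "text"), "w") as text, \
     open(os.path.join(data_dir, "utt2spk"), "w") as utt2spk:
    for row in csv.DictReader(f):
        utt, spk = row["utt_id"], row["speaker"]
        # wav.scp: utterance ID followed by the audio path
        wav_scp.write(f"{utt} {os.path.join(audio_dir, utt + '.wav')}\n")
        # text: utterance ID followed by the transcript
        text.write(f"{utt} {row['transcript']}\n")
        # utt2spk: utterance ID followed by the speaker ID
        utt2spk.write(f"{utt} {spk}\n")
```

Kaldi expects utterance IDs to be prefixed by the speaker ID and the files to be sorted; running utils/fix_data_dir.sh data/train afterwards will sort and validate the directory, and utils/utt2spk_to_spk2utt.pl produces the companion spk2utt file.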

First of all: thanks for the Android library! I'm testing it in https://github.com/swentel/solfidola, and it actually works pretty great!

I use it in my app for voice commands: certain words trigger an action. So I was wondering whether I could have a model which consists of only a few words. I basically only need to recognize words like 'one', 'two', 'three' and 'play'; I don't care about other words, as they don't trigger anything in the app.

I'm currently installing kaldi (make is compiling hehe), and then going to try and figure out if I can create a model with only a couple of words.

But I wonder: does this idea make sense, and will the model end up smaller? I'd rather not ship 30 MB just to recognize a few words.

I'll write down the steps if I can figure it out myself. Any more detailed instructions for creating such a model would be awesome, but no worries if that's hard to write down in a few lines :)

swentel avatar May 04 '20 17:05 swentel

@swentel you can just rebuild the graph, see

https://github.com/alphacep/vosk-api/blob/master/doc/adaptation.md

You can also select words in runtime, see

https://github.com/alphacep/vosk-api/blob/master/python/example/test_words.py

Let me know if you have further questions.

nshmyrev avatar May 04 '20 17:05 nshmyrev
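For the runtime approach, here is a minimal sketch in the spirit of test_words.py: KaldiRecognizer accepts an optional JSON list of allowed phrases, so decoding can be restricted to the command words without rebuilding the graph. The model path and wav file below are placeholders, and "[unk]" is the usual catch-all from the example.

```python
# Sketch: restrict recognition to a few command words at runtime,
# following python/example/test_words.py.
import json
import wave

from vosk import Model, KaldiRecognizer

model = Model("model")                 # path to an unpacked Vosk model
wf = wave.open("commands.wav", "rb")   # 16 kHz mono 16-bit PCM assumed

# Only these words can be recognized; "[unk]" absorbs everything else.
grammar = json.dumps(["one two three play", "[unk]"])
rec = KaldiRecognizer(model, wf.getframerate(), grammar)

while True:
    data = wf.readframes(4000)
    if len(data) == 0:
        break
    if rec.AcceptWaveform(data):
        print(json.loads(rec.Result())["text"])

print(json.loads(rec.FinalResult())["text"])
```

Note that this dynamic restriction works with the small models built with a runtime graph (HCLr.fst plus Gr.fst), not with the big static-graph models.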

@swentel also see https://github.com/alphacep/vosk-api/issues/55

nshmyrev avatar May 04 '20 17:05 nshmyrev

Oooh, great, thanks for the quick answer!

I'll get cracking at it after dinner. This will be awesome if it works, and I'm going to write a blog post about it, because the world needs to know about this :)

swentel avatar May 04 '20 17:05 swentel

Thank you @swentel, let me know how it goes!

nshmyrev avatar May 04 '20 17:05 nshmyrev

So this actually seemed to work!

Based on the adaptation readme, both commands run, although I'm not 100% sure what the first command (fstsymbols ...) does.

However, after running the second command with the text file containing my custom words, Gr.fst is now only 2.6 MB (compared to 23 MB). I completely reinstalled the app on my phone and it still works. Saved 20 MB, that's great!

So looking in the model directory, I still see a couple of files which are 'relatively' large:

  • final.mdl (14 MB)
  • HCLr.fst (5.9 MB)
  • ivector/final.ie (8 MB)

I was wondering: can I do something with those too? Or even better, are they even needed for the recognizer to work? (To be honest, I could of course have tested that myself already by deploying a new version and leaving those files out.)

(I'm almost sorry for what I guess are newbie questions, completely new to kaldi, but super excited it works!)

swentel avatar May 04 '20 19:05 swentel
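On the fstsymbols question above: the first command in the adaptation doc dumps the word symbol table of the existing grammar FST, so that the new, smaller grammar is compiled against the same word IDs used by HCLr.fst. Here is a sketch of that step, wrapped in Python purely for annotation; see doc/adaptation.md for the authoritative command lines.

```python
# Sketch: the first adaptation step, annotated. Equivalent to running
#   fstsymbols --save_osymbols=words.txt Gr.fst > /dev/null
# in a shell. It saves the output symbol table (word -> integer ID
# mapping) of the existing grammar FST, so the rebuilt Gr.fst keeps
# the same word IDs as the rest of the decoding graph.
import subprocess

subprocess.run(
    ["fstsymbols", "--save_osymbols=words.txt", "Gr.fst"],
    stdout=subprocess.DEVNULL,
    check=True,
)
```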

Or even better, are they even needed for the recognizer to work? (To be honest, I could of course have tested that myself already by deploying a new version and leaving those files out.)

Those files are still needed.

nshmyrev avatar May 04 '20 19:05 nshmyrev

Ok, cool, thanks!

swentel avatar May 04 '20 19:05 swentel

Published a blog post at https://realize.be/blog/offline-speech-text-trigger-custom-commands-android-kaldi-and-vosk

In case I made some stupid mistakes, do let me know ;)

swentel avatar May 07 '20 07:05 swentel

@swentel amazing, thanks a lot!

nshmyrev avatar May 08 '20 14:05 nshmyrev

Related #314

nshmyrev avatar Feb 16 '21 17:02 nshmyrev

How do we structure the words.txt file for adaptation?

trying with

covid-19 coronavirus

in my words.txt file I get:

SymbolTable::ReadText: Bad non-negative integer "coronavirus"

dazzzed avatar Feb 18 '21 17:02 dazzzed
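For context on that error: the file is parsed as an OpenFst symbol table, which expects exactly one symbol plus one non-negative integer ID per line, so two words on a single line fail with exactly this message. The expected shape is roughly:

```
<eps> 0
covid-19 1
coronavirus 2
```

Note that in the adaptation flow words.txt is normally dumped from the existing Gr.fst with fstsymbols rather than written by hand, and adding a word here does not make it recognizable unless it is also in the model's lexicon.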

The command mentioned here to create a new language model does not exist in the default compile of Kaldi; the egs directory is empty in the default compile.

plehal avatar Dec 18 '21 21:12 plehal

Would it be possible to have a "simple" script that takes an input folder with wav and csv files and does all the work to create a model?

Sure, it is called the mini_librispeech recipe. It is in kaldi/egs/mini_librispeech/s5/run.sh

I tried: 1. using the mini_librispeech recipe, which generated some files, and 2. arranging the files according to "Model Structure" in https://alphacephei.com/vosk/models.

But the first step generated many files with the same name: for 'final.mdl', I have exp/mono/final.mdl, exp/tri1/final.mdl, exp/tri2b/final.mdl, etc. I don't know which file I should put into the final structure. Any suggestions?

jipinhetundu avatar Apr 25 '22 16:04 jipinhetundu

We actually have a new recipe:

https://github.com/alphacep/vosk-api/tree/master/training

The trained model is in exp/chain/tdnn.

nshmyrev avatar Apr 25 '22 16:04 nshmyrev
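To go from the recipe's output to the layout described at https://alphacephei.com/vosk/models, something along these lines can work. The source paths below assume a typical chain setup (exp/chain/tdnn for the acoustic model and graph, exp/nnet3/extractor for the ivector extractor) and the file list is not exhaustive; check it against the model-structure doc.

```python
# Sketch: copy chain-recipe outputs into a Vosk-style model directory.
# Source paths are assumptions based on a typical Kaldi chain setup;
# verify the full file list against https://alphacephei.com/vosk/models.
import shutil
from pathlib import Path

dst = Path("model")
copies = {
    "exp/chain/tdnn/final.mdl": "am/final.mdl",
    "exp/chain/tdnn/graph/HCLG.fst": "graph/HCLG.fst",
    "exp/chain/tdnn/graph/words.txt": "graph/words.txt",
    "exp/chain/tdnn/graph/phones/word_boundary.int": "graph/phones/word_boundary.int",
    "exp/nnet3/extractor/final.ie": "ivector/final.ie",
    "exp/nnet3/extractor/final.mat": "ivector/final.mat",
    "exp/nnet3/extractor/final.dubm": "ivector/final.dubm",
    "exp/nnet3/extractor/global_cmvn.stats": "ivector/global_cmvn.stats",
    "conf/mfcc.conf": "conf/mfcc.conf",
}
for src, rel in copies.items():
    target = dst / rel
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy(src, target)

# ivector/splice_opts, ivector/online_cmvn.conf and conf/model.conf are
# also needed; see the model-structure doc for how they are produced.
```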

We actually have a new recipe:

https://github.com/alphacep/vosk-api/tree/master/training

The trained model is in exp/chain/tdnn.

Thanks a lot for your answer!

I followed your steps and tried running the new recipe, but ran into a small problem at line 28 of run.sh. It tells me this is not the correct usage:

local/prepare_dict.sh data/local/lm data/local/dict
Usage: local/prepare_dict.sh [options] <lm-dir> <g2p-model-dir> <dst-dir> ............

I looked at the corresponding part of mini_librispeech/s5/run.sh, which is written as: local/prepare_dict.sh --stage 3 --nj 30 --cmd "$train_cmd" data/local/lm data/local/lm data/local/dict_nosp

So I changed the corresponding part to

  1. local/prepare_dict.sh data/local/lm data/local/lm data/local/dict
  2. local/prepare_dict.sh --stage 3 --nj 30 data/local/lm data/local/lm data/local/dict
  3. local/prepare_dict.sh --stage 3 --nj 30 --cmd "$train_cmd" data/local/lm data/local/lm data/local/dict

1 and 2 produce different outputs, while 3 reports an error. I don't know much about Kaldi and I'm not sure if it's due to different versions; I updated to the latest version of Kaldi three days ago. What should I do next?

jipinhetundu avatar Apr 26 '22 09:04 jipinhetundu

Usage: local/prepare_dict.sh [options]

Seems like you are not using local/prepare_dict.sh from our recipe; you must have an old file. Ours doesn't have any options like those in the message:

https://github.com/alphacep/vosk-api/blob/master/training/local/prepare_dict.sh

nshmyrev avatar Apr 27 '22 00:04 nshmyrev

Seems like you are not using local/prepare_dict.sh from our recipe; you must have an old file. Ours doesn't have any options like those in the message:

https://github.com/alphacep/vosk-api/blob/master/training/local/prepare_dict.sh

Looks like I asked a silly question. Thank you for answering patiently; I have successfully run it!

jipinhetundu avatar Apr 27 '22 04:04 jipinhetundu

Also, @nshmyrev, what changes should I make to produce high-accuracy models if I'm training a model from scratch? You have suggested training an ivector of dim 40 to save memory, but does this affect accuracy?

It would also be helpful if you could share which directories to look in to build the final model compatible with Vosk, with files such as final.mdl, final.ie, conf, etc.

Manikandan18M avatar May 03 '22 12:05 Manikandan18M

@nshmyrev

Manikandan18M avatar May 04 '22 04:05 Manikandan18M

Also, @nshmyrev, what changes should I make to produce high-accuracy models if I'm training a model from scratch?

It depends on too many factors: domain of speech, amount of audio, number of GPUs. It is hard to guess.

nshmyrev avatar May 04 '22 20:05 nshmyrev

Documentation about process https://github.com/alphacep/vosk-api/blob/master/doc/models.md#training-your-own-model

Not able to open this URL.

ankur995 avatar Oct 23 '23 11:10 ankur995

@ankur995 yes, it is obsolete. Our training setup is here:

https://github.com/alphacep/vosk-api/tree/master/training

There is a Colab notebook:

https://github.com/alphacep/vosk-api/blob/master/python/example/colab/vosk-training.ipynb

nshmyrev avatar Oct 23 '23 20:10 nshmyrev
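Once a model directory is assembled from the training output, it can be sanity-checked with a few lines of the Python API. The model path and test file below are placeholders; a 16 kHz mono 16-bit PCM wav is assumed.

```python
# Sketch: smoke-test an assembled model with the Vosk Python API.
import json
import wave

from vosk import Model, KaldiRecognizer

model = Model("model")             # path to the assembled model directory
wf = wave.open("test.wav", "rb")   # 16 kHz mono 16-bit PCM assumed
rec = KaldiRecognizer(model, wf.getframerate())

while True:
    data = wf.readframes(4000)
    if len(data) == 0:
        break
    rec.AcceptWaveform(data)

# Print the final transcript of the whole file.
print(json.loads(rec.FinalResult())["text"])
```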