
I followed your steps for training; now how do I get the files to create a model?

Open mukeshbadgujar opened this issue 2 years ago • 2 comments

I followed https://github.com/alphacep/vosk-api/tree/master/training

Please tell me which files I need to create the model, and the steps to do so.

I am also writing documentation for contributing.

Please check whether there are any errors in the run below.

I removed some duplicate lines from the output because GitHub's character limit rejects the full log.
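For reference, here is a rough sketch of how the recipe's outputs typically map onto a Vosk model directory. This is inferred from the layout of published Vosk models, not from this recipe's own packaging script; the exact file names and the `exp/chain/tdnn` paths are assumptions, so please verify against an official model before relying on it:

```shell
# Hypothetical sketch: build the skeleton of a Vosk model directory and
# note which training outputs (assumed paths) would go where.
MODEL=model
mkdir -p "$MODEL/am" "$MODEL/graph" "$MODEL/conf" "$MODEL/ivector"

# exp/chain/tdnn/final.mdl       -> $MODEL/am/final.mdl   (acoustic model)
# exp/chain/tdnn/tree            -> $MODEL/am/tree        (decision tree)
# <graph dir>/HCLG.fst           -> $MODEL/graph/HCLG.fst (decoding graph)
# <graph dir>/words.txt          -> $MODEL/graph/words.txt
# conf/mfcc.conf                 -> $MODEL/conf/mfcc.conf (feature config)
# exp/chain/extractor/final.ie   -> $MODEL/ivector/final.ie

ls -d "$MODEL"/*
```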

user@user:~/Documents/ML_Projects/vosk-api/kaldi/egs/wsj/training$ ./run.sh 
Preparing phone lists and lexicon
utils/prepare_lang.sh data/local/dict <UNK> data/local/lang data/lang
Checking data/local/dict/silence_phones.txt ...
--> reading data/local/dict/silence_phones.txt
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> data/local/dict/silence_phones.txt is OK

Checking data/local/dict/optional_silence.txt ...
--> reading data/local/dict/optional_silence.txt
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> data/local/dict/optional_silence.txt is OK

Checking data/local/dict/nonsilence_phones.txt ...
--> reading data/local/dict/nonsilence_phones.txt
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> data/local/dict/nonsilence_phones.txt is OK

Checking disjoint: silence_phones.txt, nonsilence_phones.txt
--> disjoint property is OK.

Checking data/local/dict/lexicon.txt
--> reading data/local/dict/lexicon.txt
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> data/local/dict/lexicon.txt is OK

Checking data/local/dict/lexiconp.txt
--> reading data/local/dict/lexiconp.txt
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> data/local/dict/lexiconp.txt is OK

Checking lexicon pair data/local/dict/lexicon.txt and data/local/dict/lexiconp.txt
--> lexicon pair data/local/dict/lexicon.txt and data/local/dict/lexiconp.txt match

Checking data/local/dict/extra_questions.txt ...
--> data/local/dict/extra_questions.txt is empty (this is OK)
--> SUCCESS [validating dictionary directory data/local/dict]

fstaddselfloops data/lang/phones/wdisambig_phones.int data/lang/phones/wdisambig_words.int 
prepare_lang.sh: validating output directory
utils/validate_lang.pl data/lang
Checking existence of separator file
separator file data/lang/subword_separator.txt is empty or does not exist, deal in word case.
Checking data/lang/phones.txt ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> data/lang/phones.txt is OK

Checking words.txt: #0 ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> data/lang/words.txt is OK

Checking disjoint: silence.txt, nonsilence.txt, disambig.txt ...
--> silence.txt and nonsilence.txt are disjoint
--> silence.txt and disambig.txt are disjoint
--> disambig.txt and nonsilence.txt are disjoint
--> disjoint property is OK

Checking sumation: silence.txt, nonsilence.txt, disambig.txt ...
--> found no unexplainable phones in phones.txt

Checking data/lang/phones/context_indep.{txt, int, csl} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 10 entry/entries in data/lang/phones/context_indep.txt
--> data/lang/phones/context_indep.int corresponds to data/lang/phones/context_indep.txt
--> data/lang/phones/context_indep.csl corresponds to data/lang/phones/context_indep.txt
--> data/lang/phones/context_indep.{txt, int, csl} are OK

Checking data/lang/phones/nonsilence.{txt, int, csl} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 156 entry/entries in data/lang/phones/nonsilence.txt
--> data/lang/phones/nonsilence.int corresponds to data/lang/phones/nonsilence.txt
--> data/lang/phones/nonsilence.csl corresponds to data/lang/phones/nonsilence.txt
--> data/lang/phones/nonsilence.{txt, int, csl} are OK

Checking data/lang/phones/silence.{txt, int, csl} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 10 entry/entries in data/lang/phones/silence.txt
--> data/lang/phones/silence.int corresponds to data/lang/phones/silence.txt
--> data/lang/phones/silence.csl corresponds to data/lang/phones/silence.txt
--> data/lang/phones/silence.{txt, int, csl} are OK

Checking data/lang/phones/optional_silence.{txt, int, csl} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 1 entry/entries in data/lang/phones/optional_silence.txt
--> data/lang/phones/optional_silence.int corresponds to data/lang/phones/optional_silence.txt
--> data/lang/phones/optional_silence.csl corresponds to data/lang/phones/optional_silence.txt
--> data/lang/phones/optional_silence.{txt, int, csl} are OK

Checking data/lang/phones/disambig.{txt, int, csl} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 21 entry/entries in data/lang/phones/disambig.txt
--> data/lang/phones/disambig.int corresponds to data/lang/phones/disambig.txt
--> data/lang/phones/disambig.csl corresponds to data/lang/phones/disambig.txt
--> data/lang/phones/disambig.{txt, int, csl} are OK

Checking data/lang/phones/roots.{txt, int} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 41 entry/entries in data/lang/phones/roots.txt
--> data/lang/phones/roots.int corresponds to data/lang/phones/roots.txt
--> data/lang/phones/roots.{txt, int} are OK

Checking data/lang/phones/sets.{txt, int} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 41 entry/entries in data/lang/phones/sets.txt
--> data/lang/phones/sets.int corresponds to data/lang/phones/sets.txt
--> data/lang/phones/sets.{txt, int} are OK

Checking data/lang/phones/extra_questions.{txt, int} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 9 entry/entries in data/lang/phones/extra_questions.txt
--> data/lang/phones/extra_questions.int corresponds to data/lang/phones/extra_questions.txt
--> data/lang/phones/extra_questions.{txt, int} are OK

Checking data/lang/phones/word_boundary.{txt, int} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 166 entry/entries in data/lang/phones/word_boundary.txt
--> data/lang/phones/word_boundary.int corresponds to data/lang/phones/word_boundary.txt
--> data/lang/phones/word_boundary.{txt, int} are OK

Checking optional_silence.txt ...
--> reading data/lang/phones/optional_silence.txt
--> data/lang/phones/optional_silence.txt is OK

Checking disambiguation symbols: #0 and #1
--> data/lang/phones/disambig.txt has "#0" and "#1"
--> data/lang/phones/disambig.txt is OK

Checking topo ...

Checking word_boundary.txt: silence.txt, nonsilence.txt, disambig.txt ...
--> data/lang/phones/word_boundary.txt doesn't include disambiguation symbols
--> data/lang/phones/word_boundary.txt is the union of nonsilence.txt and silence.txt
--> data/lang/phones/word_boundary.txt is OK

Checking word-level disambiguation symbols...
--> data/lang/phones/wdisambig.txt exists (newer prepare_lang.sh)
Checking word_boundary.int and disambig.int
--> generating a 79 word/subword sequence
--> resulting phone sequence from L.fst corresponds to the word sequence
--> L.fst is OK
--> generating a 16 word/subword sequence
--> resulting phone sequence from L_disambig.fst corresponds to the word sequence
--> L_disambig.fst is OK

Checking data/lang/oov.{txt, int} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 1 entry/entries in data/lang/oov.txt
--> data/lang/oov.int corresponds to data/lang/oov.txt
--> data/lang/oov.{txt, int} are OK

--> data/lang/L.fst is olabel sorted
--> data/lang/L_disambig.fst is olabel sorted
--> SUCCESS [validating lang directory data/lang]
steps/make_mfcc.sh --cmd run.pl --nj 10 data/train exp/make_mfcc/train
steps/make_mfcc.sh: moving data/train/feats.scp to data/train/.backup
utils/validate_data_dir.sh: Successfully validated data-directory data/train
steps/make_mfcc.sh: [info]: no segments file exists: assuming wav.scp indexed by utterance.
steps/make_mfcc.sh: Succeeded creating MFCC features for train
steps/compute_cmvn_stats.sh data/train exp/make_mfcc/train
Succeeded creating CMVN stats for train
steps/train_mono.sh --nj 10 --cmd run.pl data/train data/lang exp/mono
steps/train_mono.sh: Initializing monophone system.
steps/train_mono.sh: Compiling training graphs
steps/train_mono.sh: Aligning data equally (pass 0)
steps/train_mono.sh: Pass 1
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 2
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 3
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 4
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 5
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 6
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 7
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 8
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 9
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 10
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 11
steps/train_mono.sh: Pass 12
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 13
steps/train_mono.sh: Pass 14
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 15
steps/train_mono.sh: Pass 16
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 17
steps/train_mono.sh: Pass 18
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 19
steps/train_mono.sh: Pass 20
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 21
steps/train_mono.sh: Pass 22
steps/train_mono.sh: Pass 23
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 24
steps/train_mono.sh: Pass 25
steps/train_mono.sh: Pass 26
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 27
steps/train_mono.sh: Pass 28
steps/train_mono.sh: Pass 29
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 30
steps/train_mono.sh: Pass 31
steps/train_mono.sh: Pass 32
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 33
steps/train_mono.sh: Pass 34
steps/train_mono.sh: Pass 35
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 36
steps/train_mono.sh: Pass 37
steps/train_mono.sh: Pass 38
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 39
steps/diagnostic/analyze_alignments.sh --cmd run.pl data/lang exp/mono
analyze_phone_length_stats.py: WARNING: optional-silence SIL is seen only 43.3352810564% of the time at utterance begin.  This may not be optimal.
analyze_phone_length_stats.py: WARNING: optional-silence SIL is seen only 51.5087063223% of the time at utterance end.  This may not be optimal.
steps/diagnostic/analyze_alignments.sh: see stats in exp/mono/log/analyze_alignments.log
1741840 warnings in exp/mono/log/acc.*.*.log
3518420 warnings in exp/mono/log/align.*.*.log
2 warnings in exp/mono/log/analyze_alignments.log
exp/mono: nj=10 align prob=-243.19 over 91.90h [retry=36.7%, fail=14.4%] states=127 gauss=1001
steps/train_mono.sh: Done training monophone system in exp/mono
steps/align_si.sh --nj 10 --cmd run.pl data/train data/lang exp/mono exp/mono_ali
steps/align_si.sh: feature type is delta
steps/align_si.sh: aligning data in data/train using model from exp/mono, putting alignments in exp/mono_ali
steps/diagnostic/analyze_alignments.sh --cmd run.pl data/lang exp/mono_ali
analyze_phone_length_stats.py: WARNING: optional-silence SIL is seen only 43.1685571582% of the time at utterance begin.  This may not be optimal.
analyze_phone_length_stats.py: WARNING: optional-silence SIL is seen only 51.449376838% of the time at utterance end.  This may not be optimal.
steps/diagnostic/analyze_alignments.sh: see stats in exp/mono_ali/log/analyze_alignments.log
steps/align_si.sh: done aligning data.
steps/train_lda_mllt.sh --cmd run.pl 2000 10000 data/train data/lang exp/mono_ali exp/tri1
steps/train_lda_mllt.sh: Accumulating LDA statistics.
steps/train_lda_mllt.sh: Accumulating tree stats
steps/train_lda_mllt.sh: Getting questions for tree clustering.
steps/train_lda_mllt.sh: Building the tree
steps/train_lda_mllt.sh: Initializing the model
steps/train_lda_mllt.sh: Converting alignments from exp/mono_ali to use current tree
steps/train_lda_mllt.sh: Compiling graphs of transcripts
Training pass 1
Training pass 2
steps/train_lda_mllt.sh: Estimating MLLT
Training pass 3
Training pass 4
steps/train_lda_mllt.sh: Estimating MLLT
Training pass 5
Training pass 6
steps/train_lda_mllt.sh: Estimating MLLT
Training pass 7
Training pass 8
Training pass 9
Training pass 10
Aligning data
Training pass 11
Training pass 12
steps/train_lda_mllt.sh: Estimating MLLT
Training pass 13
Training pass 14
Training pass 15
Training pass 16
Training pass 17
Training pass 18
Training pass 19
Training pass 20
Aligning data
Training pass 21
Training pass 22
Training pass 23
Training pass 24
Training pass 25
Training pass 26
Training pass 27
Training pass 28
Training pass 29
Training pass 30
Aligning data
Training pass 31
Training pass 32
Training pass 33
Training pass 34
steps/diagnostic/analyze_alignments.sh --cmd run.pl data/lang exp/tri1
analyze_phone_length_stats.py: WARNING: optional-silence SIL is seen only 36.2239808472% of the time at utterance begin.  This may not be optimal.
analyze_phone_length_stats.py: WARNING: optional-silence SIL is seen only 46.7424796613% of the time at utterance end.  This may not be optimal.
steps/diagnostic/analyze_alignments.sh: see stats in exp/tri1/log/analyze_alignments.log
7143 warnings in exp/tri1/log/lda_acc.*.log
1638433 warnings in exp/tri1/log/acc.*.*.log
1 warnings in exp/tri1/log/update.*.log
2 warnings in exp/tri1/log/analyze_alignments.log
1 warnings in exp/tri1/log/build_tree.log
489716 warnings in exp/tri1/log/align.*.*.log
exp/tri1: nj=10 align prob=-48.04 over 96.59h [retry=27.1%, fail=9.8%] states=1680 gauss=10029 tree-impr=3.22 lda-sum=6.80 mllt:impr,logdet=0.77,1.07
steps/train_lda_mllt.sh: Done training system with LDA+MLLT features in exp/tri1
steps/align_si.sh --nj 10 --cmd run.pl data/train data/lang exp/tri1 exp/tri1_ali
steps/align_si.sh: feature type is lda
steps/align_si.sh: aligning data in data/train using model from exp/tri1, putting alignments in exp/tri1_ali
steps/diagnostic/analyze_alignments.sh --cmd run.pl data/lang exp/tri1_ali
analyze_phone_length_stats.py: WARNING: optional-silence SIL is seen only 36.016676646% of the time at utterance begin.  This may not be optimal.
analyze_phone_length_stats.py: WARNING: optional-silence SIL is seen only 46.5305036258% of the time at utterance end.  This may not be optimal.
steps/diagnostic/analyze_alignments.sh: see stats in exp/tri1_ali/log/analyze_alignments.log
steps/align_si.sh: done aligning data.
steps/train_lda_mllt.sh --cmd run.pl 2500 15000 data/train data/lang exp/tri1_ali exp/tri2
steps/train_lda_mllt.sh: Accumulating LDA statistics.
steps/train_lda_mllt.sh: Accumulating tree stats
steps/train_lda_mllt.sh: Getting questions for tree clustering.
steps/train_lda_mllt.sh: Building the tree
steps/train_lda_mllt.sh: Initializing the model
steps/train_lda_mllt.sh: Converting alignments from exp/tri1_ali to use current tree
steps/train_lda_mllt.sh: Compiling graphs of transcripts
Training pass 1
Training pass 2
steps/train_lda_mllt.sh: Estimating MLLT
Training pass 3
Training pass 4
steps/train_lda_mllt.sh: Estimating MLLT
Training pass 5
Training pass 6
steps/train_lda_mllt.sh: Estimating MLLT
Training pass 7
Training pass 8
Training pass 9
Training pass 10
Aligning data
Training pass 11
Training pass 12
steps/train_lda_mllt.sh: Estimating MLLT
Training pass 13
Training pass 14
Training pass 15
Training pass 16
Training pass 17
Training pass 18
Training pass 19
Training pass 20
Aligning data
Training pass 21
Training pass 22
Training pass 23
Training pass 24
Training pass 25
Training pass 26
Training pass 27
Training pass 28
Training pass 29
Training pass 30
Aligning data
Training pass 31
Training pass 32
Training pass 33
Training pass 34
steps/diagnostic/analyze_alignments.sh --cmd run.pl data/lang exp/tri2
analyze_phone_length_stats.py: WARNING: optional-silence SIL is seen only 35.4488086416% of the time at utterance begin.  This may not be optimal.
analyze_phone_length_stats.py: WARNING: optional-silence SIL is seen only 46.1002941903% of the time at utterance end.  This may not be optimal.
steps/diagnostic/analyze_alignments.sh: see stats in exp/tri2/log/analyze_alignments.log
4896 warnings in exp/tri2/log/lda_acc.*.log
1 warnings in exp/tri2/log/build_tree.log
57502 warnings in exp/tri2/log/align.*.*.log
181624 warnings in exp/tri2/log/acc.*.*.log
2 warnings in exp/tri2/log/analyze_alignments.log
exp/tri2: nj=10 align prob=-48.78 over 95.41h [retry=26.6%, fail=10.9%] states=2088 gauss=15028 tree-impr=4.16 lda-sum=15.78 mllt:impr,logdet=0.84,1.19
steps/train_lda_mllt.sh: Done training system with LDA+MLLT features in exp/tri2
steps/align_si.sh --nj 10 --cmd run.pl data/train data/lang exp/tri2 exp/tri2_ali
steps/align_si.sh: feature type is lda
steps/align_si.sh: aligning data in data/train using model from exp/tri2, putting alignments in exp/tri2_ali
steps/diagnostic/analyze_alignments.sh --cmd run.pl data/lang exp/tri2_ali
analyze_phone_length_stats.py: WARNING: optional-silence SIL is seen only 35.1991557771% of the time at utterance begin.  This may not be optimal.
analyze_phone_length_stats.py: WARNING: optional-silence SIL is seen only 45.817055099% of the time at utterance end.  This may not be optimal.
steps/diagnostic/analyze_alignments.sh: see stats in exp/tri2_ali/log/analyze_alignments.log
steps/align_si.sh: done aligning data.
steps/train_lda_mllt.sh --cmd run.pl 2500 20000 data/train data/lang exp/tri2_ali exp/tri3
steps/train_lda_mllt.sh: Accumulating LDA statistics.
steps/train_lda_mllt.sh: Accumulating tree stats
steps/train_lda_mllt.sh: Getting questions for tree clustering.
steps/train_lda_mllt.sh: Building the tree
steps/train_lda_mllt.sh: Initializing the model
steps/train_lda_mllt.sh: Converting alignments from exp/tri2_ali to use current tree
steps/train_lda_mllt.sh: Compiling graphs of transcripts
Training pass 1
Training pass 2
steps/train_lda_mllt.sh: Estimating MLLT
Training pass 3
Training pass 4
steps/train_lda_mllt.sh: Estimating MLLT
Training pass 5
Training pass 6
steps/train_lda_mllt.sh: Estimating MLLT
Training pass 7
Training pass 8
Training pass 9
Training pass 10
Aligning data
Training pass 11
Training pass 12
steps/train_lda_mllt.sh: Estimating MLLT
Training pass 13
Training pass 14
Training pass 15
Training pass 16
Training pass 17
Training pass 18
Training pass 19
Training pass 20
Aligning data
Training pass 21
Training pass 22
Training pass 23
Training pass 24
Training pass 25
Training pass 26
Training pass 27
Training pass 28
Training pass 29
Training pass 30
Aligning data
Training pass 31
Training pass 32
Training pass 33
Training pass 34
steps/diagnostic/analyze_alignments.sh --cmd run.pl data/lang exp/tri3
analyze_phone_length_stats.py: WARNING: optional-silence SIL is seen only 33.1143853335% of the time at utterance begin.  This may not be optimal.
analyze_phone_length_stats.py: WARNING: optional-silence SIL is seen only 45.331233832% of the time at utterance end.  This may not be optimal.
steps/diagnostic/analyze_alignments.sh: see stats in exp/tri3/log/analyze_alignments.log
1 warnings in exp/tri3/log/build_tree.log
1 warnings in exp/tri3/log/update.*.log
191039 warnings in exp/tri3/log/acc.*.*.log
57639 warnings in exp/tri3/log/align.*.*.log
2 warnings in exp/tri3/log/analyze_alignments.log
5451 warnings in exp/tri3/log/lda_acc.*.log
exp/tri3: nj=10 align prob=-48.86 over 95.23h [retry=26.1%, fail=11.1%] states=2088 gauss=20029 tree-impr=4.34 lda-sum=17.19 mllt:impr,logdet=0.86,1.26
steps/train_lda_mllt.sh: Done training system with LDA+MLLT features in exp/tri3
steps/align_si.sh --nj 10 --cmd run.pl data/train data/lang exp/tri3 exp/tri3_ali
steps/align_si.sh: feature type is lda
steps/align_si.sh: aligning data in data/train using model from exp/tri3, putting alignments in exp/tri3_ali
steps/diagnostic/analyze_alignments.sh --cmd run.pl data/lang exp/tri3_ali
analyze_phone_length_stats.py: WARNING: optional-silence SIL is seen only 32.5647767366% of the time at utterance begin.  This may not be optimal.
analyze_phone_length_stats.py: WARNING: optional-silence SIL is seen only 45.1807905796% of the time at utterance end.  This may not be optimal.
steps/diagnostic/analyze_alignments.sh: see stats in exp/tri3_ali/log/analyze_alignments.log
steps/align_si.sh: done aligning data.
local/chain/run_tdnn.sh 
local/chain/run_ivector_common.sh: computing a subset of data to train the diagonal UBM.
utils/data/subset_data_dir.sh: reducing #utt from 49989 to 12497
local/chain/run_ivector_common.sh: computing a PCA transform from the data.
steps/online/nnet2/get_pca_transform.sh --cmd run.pl --splice-opts --left-context=3 --right-context=3 --max-utts 10000 --subsample 2 exp/chain/diag_ubm/train_subset exp/chain/pca_transform
Done estimating PCA transform in exp/chain/pca_transform
local/chain/run_ivector_common.sh: training the diagonal UBM.
steps/online/nnet2/train_diag_ubm.sh --cmd run.pl --nj 10 --num-frames 700000 --num-threads 8 exp/chain/diag_ubm/train_subset 512 exp/chain/pca_transform exp/chain/diag_ubm
filter_scps.pl: warning: some input lines did not get output
steps/online/nnet2/train_diag_ubm.sh: Directory exp/chain/diag_ubm already exists. Backing up diagonal UBM in exp/chain/diag_ubm/backup.U4M
steps/online/nnet2/train_diag_ubm.sh: initializing model from E-M in memory, 
steps/online/nnet2/train_diag_ubm.sh: starting from 256 Gaussians, reaching 512;
steps/online/nnet2/train_diag_ubm.sh: for 20 iterations, using at most 700000 frames of data
Getting Gaussian-selection info
steps/online/nnet2/train_diag_ubm.sh: will train for 4 iterations, in parallel over
steps/online/nnet2/train_diag_ubm.sh: 10 machines, parallelized with 'run.pl'
steps/online/nnet2/train_diag_ubm.sh: Training pass 0
steps/online/nnet2/train_diag_ubm.sh: Training pass 1
steps/online/nnet2/train_diag_ubm.sh: Training pass 2
steps/online/nnet2/train_diag_ubm.sh: Training pass 3
local/chain/run_ivector_common.sh: training the iVector extractor
steps/online/nnet2/train_ivector_extractor.sh --cmd run.pl --nj 2 --ivector-dim 40 data/train exp/chain/diag_ubm exp/chain/extractor
steps/online/nnet2/train_ivector_extractor.sh: Directory exp/chain/extractor already exists. Backing up iVector extractor in exp/chain/extractor/backup.O1v
steps/online/nnet2/train_ivector_extractor.sh: doing Gaussian selection and posterior computation
Accumulating stats (pass 0)
Summing accs (pass 0)
Updating model (pass 0)
Accumulating stats (pass 1)
Summing accs (pass 1)
Updating model (pass 1)
Accumulating stats (pass 2)
Summing accs (pass 2)
Updating model (pass 2)
Accumulating stats (pass 3)
Summing accs (pass 3)
Updating model (pass 3)
Accumulating stats (pass 4)
Summing accs (pass 4)
Updating model (pass 4)
Accumulating stats (pass 5)
Summing accs (pass 5)
Updating model (pass 5)
Accumulating stats (pass 6)
Summing accs (pass 6)
Updating model (pass 6)
Accumulating stats (pass 7)
Summing accs (pass 7)
Updating model (pass 7)
Accumulating stats (pass 8)
Summing accs (pass 8)
Updating model (pass 8)
Accumulating stats (pass 9)
Summing accs (pass 9)
Updating model (pass 9)
utils/data/modify_speaker_info.sh: copied data from data/train to exp/chain/ivectors_train/train_max2, number of speakers changed from 49989 to 49989
utils/validate_data_dir.sh: Successfully validated data-directory exp/chain/ivectors_train/train_max2
steps/online/nnet2/extract_ivectors_online.sh --cmd run.pl --nj 10 exp/chain/ivectors_train/train_max2 exp/chain/extractor exp/chain/ivectors_train
steps/online/nnet2/extract_ivectors_online.sh: extracting iVectors
steps/online/nnet2/extract_ivectors_online.sh: combining iVectors across jobs
steps/online/nnet2/extract_ivectors_online.sh: done extracting (online) iVectors to exp/chain/ivectors_train using the extractor in exp/chain/extractor.
local/chain/run_tdnn.sh: creating lang directory data/lang_chain with chain-type topology
steps/align_fmllr_lats.sh --nj 20 --cmd run.pl data/train data/lang exp/tri3 exp/chain/tri3_train_lats
steps/align_fmllr_lats.sh: feature type is lda
steps/align_fmllr_lats.sh: compiling training graphs
steps/align_fmllr_lats.sh: aligning data in data/train using exp/tri3/final.mdl and speaker-independent features.
steps/align_fmllr_lats.sh: computing fMLLR transforms
steps/align_fmllr_lats.sh: generating lattices containing alternate pronunciations.
steps/align_fmllr_lats.sh: done generating lattices from training transcripts.
4412 warnings in exp/chain/tri3_train_lats/log/generate_lattices.*.log
37396 warnings in exp/chain/tri3_train_lats/log/fmllr.*.log
18498 warnings in exp/chain/tri3_train_lats/log/align_pass1.*.log
steps/nnet3/chain/build_tree.sh --frame-subsampling-factor 3 --context-opts --context-width=2 --central-position=1 --cmd run.pl 2500 data/train data/lang_chain exp/tri3_ali exp/chain/tree
steps/nnet3/chain/build_tree.sh: feature type is lda
steps/nnet3/chain/build_tree.sh: Initializing monophone model (for alignment conversion, in case topology changed)
steps/nnet3/chain/build_tree.sh: Accumulating tree stats
steps/nnet3/chain/build_tree.sh: Getting questions for tree clustering.
steps/nnet3/chain/build_tree.sh: Building the tree
steps/nnet3/chain/build_tree.sh: Initializing the model
steps/nnet3/chain/build_tree.sh: Converting alignments from exp/tri3_ali to use current tree
steps/nnet3/chain/build_tree.sh: Done building tree
local/chain/run_tdnn.sh: creating neural net configs using the xconfig parser
tree-info exp/chain/tree/tree 
steps/nnet3/xconfig_to_configs.py --xconfig-file exp/chain/tdnn/configs/network.xconfig --config-dir exp/chain/tdnn/configs/
nnet3-init exp/chain/tdnn/configs//ref.config exp/chain/tdnn/configs//ref.raw 
LOG (nnet3-init[5.5.1012~1-dd107]:main():nnet3-init.cc:80) Initialized raw neural net and wrote it to exp/chain/tdnn/configs//ref.raw
nnet3-info exp/chain/tdnn/configs//ref.raw 
nnet3-init exp/chain/tdnn/configs//ref.config exp/chain/tdnn/configs//ref.raw 
LOG (nnet3-init[5.5.1012~1-dd107]:main():nnet3-init.cc:80) Initialized raw neural net and wrote it to exp/chain/tdnn/configs//ref.raw
nnet3-info exp/chain/tdnn/configs//ref.raw 
/home/user/Documents/ML_Projects/vosk-api/kaldi/egs/wsj/training/steps/libs/common.py:127: SyntaxWarning: "is not" with a literal. Did you mean "!="?
  if p.returncode is not 0:
/home/user/Documents/ML_Projects/vosk-api/kaldi/egs/wsj/training/steps/libs/common.py:147: SyntaxWarning: "is not" with a literal. Did you mean "!="?
  if p.returncode is not 0:
/home/user/Documents/ML_Projects/vosk-api/kaldi/egs/wsj/training/steps/libs/common.py:203: SyntaxWarning: "is not" with a literal. Did you mean "!="?
  if popen_object.returncode is not 0:
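As an aside, the SyntaxWarning above comes from Kaldi's own `steps/libs/common.py`, not from this setup: `is not` compares object identity, while return codes should be compared by value with `!=`. A minimal illustration in plain Python (unrelated to the Kaldi code itself):

```python
# "is" compares object identity, "==" / "!=" compare values.
# CPython caches small integers, so "returncode is not 0" often works
# by accident, but identity is not a reliable value comparison.
a = int("1001")
b = int("1001")
print(a == b)   # True: equal values
print(a is b)   # False in CPython: two distinct int objects
# So "p.returncode is not 0" should be spelled "p.returncode != 0".
```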
2022-05-12 21:43:32,423 [steps/nnet3/chain/train.py:35 - <module> - INFO ] Starting chain model trainer (train.py)
steps/nnet3/chain/train.py --stage -10 --cmd run.pl --feat.online-ivector-dir exp/chain/ivectors_train --feat.cmvn-opts --norm-means=false --norm-vars=false --chain.xent-regularize 0.1 --chain.leaky-hmm-coefficient 0.1 --chain.l2-regularize 0.0 --chain.apply-deriv-weights false --chain.lm-opts=--num-extra-lm-states=2000 --egs.cmd run.pl --egs.dir  --egs.stage -10 --egs.opts --frames-overlap-per-eg 0 --constrained false --egs.chunk-width 140,100,160 --trainer.dropout-schedule 0,[email protected],[email protected],0 --trainer.add-option=--optimization.memory-compression-level=2 --trainer.num-chunk-per-minibatch 64 --trainer.frames-per-iter 2500000 --trainer.num-epochs 20 --trainer.optimization.num-jobs-initial 1 --trainer.optimization.num-jobs-final 1 --trainer.optimization.initial-effective-lrate 0.001 --trainer.optimization.final-effective-lrate 0.0001 --trainer.max-param-change 2.0 --cleanup.remove-egs true --feat-dir data/train --tree-dir exp/chain/tree --lat-dir exp/chain/tri3_train_lats --dir exp/chain/tdnn
['steps/nnet3/chain/train.py', '--stage', '-10', '--cmd', 'run.pl', '--feat.online-ivector-dir', 'exp/chain/ivectors_train', '--feat.cmvn-opts', '--norm-means=false --norm-vars=false', '--chain.xent-regularize', '0.1', '--chain.leaky-hmm-coefficient', '0.1', '--chain.l2-regularize', '0.0', '--chain.apply-deriv-weights', 'false', '--chain.lm-opts=--num-extra-lm-states=2000', '--egs.cmd', 'run.pl', '--egs.dir', '', '--egs.stage', '-10', '--egs.opts', '--frames-overlap-per-eg 0 --constrained false', '--egs.chunk-width', '140,100,160', '--trainer.dropout-schedule', '0,[email protected],[email protected],0', '--trainer.add-option=--optimization.memory-compression-level=2', '--trainer.num-chunk-per-minibatch', '64', '--trainer.frames-per-iter', '2500000', '--trainer.num-epochs', '20', '--trainer.optimization.num-jobs-initial', '1', '--trainer.optimization.num-jobs-final', '1', '--trainer.optimization.initial-effective-lrate', '0.001', '--trainer.optimization.final-effective-lrate', '0.0001', '--trainer.max-param-change', '2.0', '--cleanup.remove-egs', 'true', '--feat-dir', 'data/train', '--tree-dir', 'exp/chain/tree', '--lat-dir', 'exp/chain/tri3_train_lats', '--dir', 'exp/chain/tdnn']
2022-05-12 21:43:32,428 [steps/nnet3/chain/train.py:284 - train - INFO ] Arguments for the experiment
{'alignment_subsampling_factor': 3,
 'apply_deriv_weights': False,
 'backstitch_training_interval': 1,
 'backstitch_training_scale': 0.0,
 'chain_opts': '',
 'chunk_left_context': 0,
 'chunk_left_context_initial': -1,
 'chunk_right_context': 0,
 'chunk_right_context_final': -1,
 'chunk_width': '140,100,160',
 'cleanup': True,
 'cmvn_opts': '--norm-means=false --norm-vars=false',
 'combine_sum_to_one_penalty': 0.0,
 'command': 'run.pl',
 'compute_per_dim_accuracy': False,
 'deriv_truncate_margin': None,
 'dir': 'exp/chain/tdnn',
 'do_final_combination': True,
 'dropout_schedule': '0,[email protected],[email protected],0',
 'egs_command': 'run.pl',
 'egs_dir': None,
 'egs_nj': 0,
 'egs_opts': '--frames-overlap-per-eg 0 --constrained false',
 'egs_stage': -10,
 'email': None,
 'exit_stage': None,
 'feat_dir': 'data/train',
 'final_effective_lrate': 0.0001,
 'frame_subsampling_factor': 3,
 'frames_per_iter': 2500000,
 'initial_effective_lrate': 0.001,
 'input_model': None,
 'l2_regularize': 0.0,
 'lat_dir': 'exp/chain/tri3_train_lats',
 'leaky_hmm_coefficient': 0.1,
 'left_deriv_truncate': None,
 'left_tolerance': 5,
 'lm_opts': '--num-extra-lm-states=2000',
 'max_lda_jobs': 10,
 'max_models_combine': 20,
 'max_objective_evaluations': 30,
 'max_param_change': 2.0,
 'momentum': 0.0,
 'num_chunk_per_minibatch': '64',
 'num_epochs': 20.0,
 'num_jobs_final': 1,
 'num_jobs_initial': 1,
 'num_jobs_step': 1,
 'online_ivector_dir': 'exp/chain/ivectors_train',
 'preserve_model_interval': 100,
 'presoftmax_prior_scale_power': -0.25,
 'proportional_shrink': 0.0,
 'rand_prune': 4.0,
 'remove_egs': True,
 'reporting_interval': 0.1,
 'right_tolerance': 5,
 'samples_per_iter': 400000,
 'shrink_saturation_threshold': 0.4,
 'shrink_value': 1.0,
 'shuffle_buffer_size': 5000,
 'srand': 0,
 'stage': -10,
 'train_opts': ['--optimization.memory-compression-level=2'],
 'tree_dir': 'exp/chain/tree',
 'use_gpu': 'yes',
 'xent_regularize': 0.1}
2022-05-12 21:43:32,456 [steps/nnet3/chain/train.py:341 - train - INFO ] Creating phone language-model
2022-05-12 21:43:34,363 [steps/nnet3/chain/train.py:346 - train - INFO ] Creating denominator FST
copy-transition-model exp/chain/tree/final.mdl exp/chain/tdnn/0.trans_mdl 
LOG (copy-transition-model[5.5.1012~1-dd107]:main():copy-transition-model.cc:62) Copied transition model.
2022-05-12 21:43:35,035 [steps/nnet3/chain/train.py:382 - train - INFO ] Generating egs
steps/nnet3/chain/get_egs.sh --frames-overlap-per-eg 0 --constrained false --cmd run.pl --cmvn-opts --norm-means=false --norm-vars=false --online-ivector-dir exp/chain/ivectors_train --left-context 27 --right-context 27 --left-context-initial -1 --right-context-final -1 --left-tolerance 5 --right-tolerance 5 --frame-subsampling-factor 3 --alignment-subsampling-factor 3 --stage -10 --frames-per-iter 2500000 --frames-per-eg 140,100,160 --srand 0 data/train exp/chain/tdnn exp/chain/tri3_train_lats exp/chain/tdnn/egs
steps/nnet3/chain/get_egs.sh: Holding out 300 utterances in validation set and 300 in training diagnostic set, out of total 49989.
steps/nnet3/chain/get_egs.sh: creating egs.  To ensure they are not deleted later you can do:  touch exp/chain/tdnn/egs/.nodelete
steps/nnet3/chain/get_egs.sh: feature type is raw, with 'apply-cmvn'
tree-info exp/chain/tdnn/tree 
feat-to-dim scp:exp/chain/ivectors_train/ivector_online.scp - 
steps/nnet3/chain/get_egs.sh: working out number of frames of training data
steps/nnet3/chain/get_egs.sh: working out feature dim
steps/nnet3/chain/get_egs.sh: creating 16 archives, each with 14985 egs, with
steps/nnet3/chain/get_egs.sh:   140,100,160 labels per example, and (left,right) context = (27,27)
steps/nnet3/chain/get_egs.sh: Getting validation and training subset examples in background.
steps/nnet3/chain/get_egs.sh: Generating training examples on disk
steps/nnet3/chain/get_egs.sh: Getting subsets of validation examples for diagnostics and combination.
steps/nnet3/chain/get_egs.sh: recombining and shuffling order of archives on disk
steps/nnet3/chain/get_egs.sh: Removing temporary archives, alignments and lattices
steps/nnet3/chain/get_egs.sh: Finished preparing training examples
2022-05-12 21:44:48,469 [steps/nnet3/chain/train.py:431 - train - INFO ] Copying the properties from exp/chain/tdnn/egs to exp/chain/tdnn
2022-05-12 21:44:48,469 [steps/nnet3/chain/train.py:454 - train - INFO ] Preparing the initial acoustic model.
2022-05-12 21:44:48,739 [steps/nnet3/chain/train.py:487 - train - INFO ] Training will run for 20.0 epochs = 960 iterations
2022-05-12 21:44:48,739 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 0/959   Jobs: 1   Epoch: 0.00/20.0 (0.0% complete)   lr: 0.001000   
2022-05-12 21:45:14,288 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 1/959   Jobs: 1   Epoch: 0.02/20.0 (0.1% complete)   lr: 0.000998   
2022-05-12 21:45:34,829 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 2/959   Jobs: 1   Epoch: 0.04/20.0 (0.2% complete)   lr: 0.000995   
2022-05-12 21:45:55,693 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 3/959   Jobs: 1   Epoch: 0.06/20.0 (0.3% complete)   lr: 0.000993   
2022-05-12 21:46:16,417 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 4/959   Jobs: 1   Epoch: 0.08/20.0 (0.4% complete)   lr: 0.000990   
2022-05-12 21:46:37,271 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 5/959   Jobs: 1   Epoch: 0.10/20.0 (0.5% complete)   lr: 0.000988   
2022-05-12 21:46:58,264 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 6/959   Jobs: 1   Epoch: 0.12/20.0 (0.6% complete)   lr: 0.000986   
2022-05-12 21:47:18,914 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 7/959   Jobs: 1   Epoch: 0.15/20.0 (0.7% complete)   lr: 0.000983   
2022-05-12 21:47:40,045 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 8/959   Jobs: 1   Epoch: 0.17/20.0 (0.8% complete)   lr: 0.000981   
2022-05-12 21:48:00,772 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 9/959   Jobs: 1   Epoch: 0.19/20.0 (0.9% complete)   lr: 0.000979   
2022-05-12 21:48:21,401 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 10/959   Jobs: 1   Epoch: 0.21/20.0 (1.0% complete)   lr: 0.000976   
2022-05-12 21:48:42,254 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 11/959   Jobs: 1   Epoch: 0.23/20.0 (1.1% complete)   lr: 0.000974   
2022-05-12 21:49:02,967 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 12/959   Jobs: 1   Epoch: 0.25/20.0 (1.2% complete)   lr: 0.000972   
2022-05-12 21:49:23,566 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 13/959   Jobs: 1   Epoch: 0.27/20.0 (1.4% complete)   lr: 0.000969   
2022-05-12 21:49:44,214 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 14/959   Jobs: 1   Epoch: 0.29/20.0 (1.5% complete)   lr: 0.000967   
2022-05-12 21:50:05,064 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 15/959   Jobs: 1   Epoch: 0.31/20.0 (1.6% complete)   lr: 0.000965   
2022-05-12 21:50:25,853 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 16/959   Jobs: 1   Epoch: 0.33/20.0 (1.7% complete)   lr: 0.000962   
2022-05-12 21:50:46,246 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 17/959   Jobs: 1   Epoch: 0.35/20.0 (1.8% complete)   lr: 0.000960   
2022-05-12 21:51:07,165 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 18/959   Jobs: 1   Epoch: 0.38/20.0 (1.9% complete)   lr: 0.000958   
2022-05-12 21:51:28,136 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 19/959   Jobs: 1   Epoch: 0.40/20.0 (2.0% complete)   lr: 0.000955   
2022-05-12 21:51:48,700 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 20/959   Jobs: 1   Epoch: 0.42/20.0 (2.1% complete)   lr: 0.000953   
2022-05-12 21:52:15,183 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 21/959   Jobs: 1   Epoch: 0.44/20.0 (2.2% complete)   lr: 0.000951   
2022-05-12 21:52:36,342 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 22/959   Jobs: 1   Epoch: 0.46/20.0 (2.3% complete)   lr: 0.000949   
2022-05-12 21:52:57,016 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 23/959   Jobs: 1   Epoch: 0.48/20.0 (2.4% complete)   lr: 0.000946   
2022-05-12 21:53:17,775 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 24/959   Jobs: 1   Epoch: 0.50/20.0 (2.5% complete)   lr: 0.000944   
2022-05-12 21:53:38,530 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 25/959   Jobs: 1   Epoch: 0.52/20.0 (2.6% complete)   lr: 0.000942   
2022-05-12 21:53:59,345 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 26/959   Jobs: 1   Epoch: 0.54/20.0 (2.7% complete)   lr: 0.000940   
2022-05-12 21:54:19,954 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 27/959   Jobs: 1   Epoch: 0.56/20.0 (2.8% complete)   lr: 0.000937   
2022-05-12 21:54:40,398 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 28/959   Jobs: 1   Epoch: 0.58/20.0 (2.9% complete)   lr: 0.000935   
2022-05-12 21:55:01,123 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 29/959   Jobs: 1   Epoch: 0.60/20.0 (3.0% complete)   lr: 0.000933   
2022-05-12 21:55:21,882 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 30/959   Jobs: 1   Epoch: 0.62/20.0 (3.1% complete)   lr: 0.000931   
2022-05-12 21:55:42,407 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 31/959   Jobs: 1   Epoch: 0.65/20.0 (3.2% complete)   lr: 0.000928   
2022-05-12 21:56:03,175 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 32/959   Jobs: 1   Epoch: 0.67/20.0 (3.3% complete)   lr: 0.000926   
2022-05-12 21:56:23,764 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 33/959   Jobs: 1   Epoch: 0.69/20.0 (3.4% complete)   lr: 0.000924   
2022-05-12 21:56:44,308 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 34/959   Jobs: 1   Epoch: 0.71/20.0 (3.5% complete)   lr: 0.000922   
2022-05-12 21:57:04,985 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 35/959   Jobs: 1   Epoch: 0.73/20.0 (3.6% complete)   lr: 0.000919   
2022-05-12 21:57:25,656 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 36/959   Jobs: 1   Epoch: 0.75/20.0 (3.8% complete)   lr: 0.000917   
2022-05-12 21:57:46,615 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 37/959   Jobs: 1   Epoch: 0.77/20.0 (3.9% complete)   lr: 0.000915   
2022-05-12 21:58:07,497 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 38/959   Jobs: 1   Epoch: 0.79/20.0 (4.0% complete)   lr: 0.000913   
2022-05-12 21:58:28,079 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 39/959   Jobs: 1   Epoch: 0.81/20.0 (4.1% complete)   lr: 0.000911   
2022-05-12 21:58:49,089 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 40/959   Jobs: 1   Epoch: 0.83/20.0 (4.2% complete)   lr: 0.000909   
2022-05-12 21:59:15,382 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 41/959   Jobs: 1   Epoch: 0.85/20.0 (4.3% complete)   lr: 0.000906   
2022-05-12 21:59:35,995 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 42/959   Jobs: 1   Epoch: 0.88/20.0 (4.4% complete)   lr: 0.000904   
2022-05-12 21:59:56,837 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 43/959   Jobs: 1   Epoch: 0.90/20.0 (4.5% complete)   lr: 0.000902   
2022-05-12 22:00:17,523 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 44/959   Jobs: 1   Epoch: 0.92/20.0 (4.6% complete)   lr: 0.000900   
2022-05-12 22:00:37,961 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 45/959   Jobs: 1   Epoch: 0.94/20.0 (4.7% complete)   lr: 0.000898   
2022-05-12 22:00:58,855 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 46/959   Jobs: 1   Epoch: 0.96/20.0 (4.8% complete)   lr: 0.000896   
2022-05-12 22:01:19,600 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 47/959   Jobs: 1   Epoch: 0.98/20.0 (4.9% complete)   lr: 0.000893   
2022-05-12 22:01:40,237 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 48/959   Jobs: 1   Epoch: 1.00/20.0 (5.0% complete)   lr: 0.000891   
2022-05-12 22:02:00,557 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 49/959   Jobs: 1   Epoch: 1.02/20.0 (5.1% complete)   lr: 0.000889   
2022-05-12 22:02:21,159 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 50/959   Jobs: 1   Epoch: 1.04/20.0 (5.2% complete)   lr: 0.000887   
2022-05-12 22:02:42,075 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 51/959   Jobs: 1   Epoch: 1.06/20.0 (5.3% complete)   lr: 0.000885   
2022-05-12 22:03:02,544 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 52/959   Jobs: 1   Epoch: 1.08/20.0 (5.4% complete)   lr: 0.000883   
2022-05-12 22:03:23,218 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 53/959   Jobs: 1   Epoch: 1.10/20.0 (5.5% complete)   lr: 0.000881   
2022-05-12 22:03:44,316 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 54/959   Jobs: 1   Epoch: 1.12/20.0 (5.6% complete)   lr: 0.000879   
2022-05-12 22:04:04,880 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 55/959   Jobs: 1   Epoch: 1.15/20.0 (5.7% complete)   lr: 0.000876   
2022-05-12 22:04:25,628 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 56/959   Jobs: 1   Epoch: 1.17/20.0 (5.8% complete)   lr: 0.000874   
2022-05-12 22:04:46,324 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 57/959   Jobs: 1   Epoch: 1.19/20.0 (5.9% complete)   lr: 0.000872   
2022-05-12 22:05:07,063 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 58/959   Jobs: 1   Epoch: 1.21/20.0 (6.0% complete)   lr: 0.000870   
2022-05-12 22:05:27,660 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 59/959   Jobs: 1   Epoch: 1.23/20.0 (6.1% complete)   lr: 0.000868   
2022-05-12 22:05:48,131 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 60/959   Jobs: 1   Epoch: 1.25/20.0 (6.2% complete)   lr: 0.000866   
2022-05-12 22:06:14,326 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 61/959   Jobs: 1   Epoch: 1.27/20.0 (6.4% complete)   lr: 0.000864   
2022-05-12 22:06:35,122 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 62/959   Jobs: 1   Epoch: 1.29/20.0 (6.5% complete)   lr: 0.000862   
2022-05-12 22:06:55,651 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 63/959   Jobs: 1   Epoch: 1.31/20.0 (6.6% complete)   lr: 0.000860   
2022-05-12 22:07:16,332 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 64/959   Jobs: 1   Epoch: 1.33/20.0 (6.7% complete)   lr: 0.000858   
2022-05-12 22:07:36,802 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 65/959   Jobs: 1   Epoch: 1.35/20.0 (6.8% complete)   lr: 0.000856   
2022-05-12 22:07:57,249 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 66/959   Jobs: 1   Epoch: 1.38/20.0 (6.9% complete)   lr: 0.000854   
2022-05-12 22:08:17,997 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 67/959   Jobs: 1   Epoch: 1.40/20.0 (7.0% complete)   lr: 0.000852   
2022-05-12 22:08:38,643 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 68/959   Jobs: 1   Epoch: 1.42/20.0 (7.1% complete)   lr: 0.000850   
2022-05-12 22:08:59,452 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 69/959   Jobs: 1   Epoch: 1.44/20.0 (7.2% complete)   lr: 0.000847   
2022-05-12 22:09:20,254 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 70/959   Jobs: 1   Epoch: 1.46/20.0 (7.3% complete)   lr: 0.000845   
2022-05-12 22:09:40,952 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 71/959   Jobs: 1   Epoch: 1.48/20.0 (7.4% complete)   lr: 0.000843   
2022-05-12 22:10:01,898 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 72/959   Jobs: 1   Epoch: 1.50/20.0 (7.5% complete)   lr: 0.000841   
2022-05-12 22:10:22,491 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 73/959   Jobs: 1   Epoch: 1.52/20.0 (7.6% complete)   lr: 0.000839   
2022-05-12 22:10:43,057 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 74/959   Jobs: 1   Epoch: 1.54/20.0 (7.7% complete)   lr: 0.000837   
2022-05-12 22:11:03,800 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 75/959   Jobs: 1   Epoch: 1.56/20.0 (7.8% complete)   lr: 0.000835   
2022-05-12 22:11:24,409 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 76/959   Jobs: 1   Epoch: 1.58/20.0 (7.9% complete)   lr: 0.000833   
2022-05-12 22:11:44,842 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 77/959   Jobs: 1   Epoch: 1.60/20.0 (8.0% complete)   lr: 0.000831   
2022-05-12 22:12:05,681 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 78/959   Jobs: 1   Epoch: 1.62/20.0 (8.1% complete)   lr: 0.000829   
2022-05-12 22:12:26,355 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 79/959   Jobs: 1   Epoch: 1.65/20.0 (8.2% complete)   lr: 0.000827   
2022-05-12 22:12:47,024 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 80/959   Jobs: 1   Epoch: 1.67/20.0 (8.3% complete)   lr: 0.000825   
2022-05-12 22:13:12,798 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 81/959   Jobs: 1   Epoch: 1.69/20.0 (8.4% complete)   lr: 0.000823   
2022-05-12 22:13:33,400 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 82/959   Jobs: 1   Epoch: 1.71/20.0 (8.5% complete)   lr: 0.000821   
2022-05-12 22:13:54,179 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 83/959   Jobs: 1   Epoch: 1.73/20.0 (8.6% complete)   lr: 0.000819   
2022-05-12 22:14:14,530 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 84/959   Jobs: 1   Epoch: 1.75/20.0 (8.8% complete)   lr: 0.000818   
2022-05-12 22:14:35,526 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 85/959   Jobs: 1   Epoch: 1.77/20.0 (8.9% complete)   lr: 0.000816   
2022-05-12 22:14:56,695 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 86/959   Jobs: 1   Epoch: 1.79/20.0 (9.0% complete)   lr: 0.000814   
2022-05-12 22:15:17,387 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 87/959   Jobs: 1   Epoch: 1.81/20.0 (9.1% complete)   lr: 0.000812   
2022-05-12 22:15:38,198 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 88/959   Jobs: 1   Epoch: 1.83/20.0 (9.2% complete)   lr: 0.000810   
2022-05-12 22:15:58,896 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 89/959   Jobs: 1   Epoch: 1.85/20.0 (9.3% complete)   lr: 0.000808   
2022-05-12 22:16:19,594 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 90/959   Jobs: 1   Epoch: 1.88/20.0 (9.4% complete)   lr: 0.000806   
2022-05-12 22:16:40,171 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 91/959   Jobs: 1   Epoch: 1.90/20.0 (9.5% complete)   lr: 0.000804   
2022-05-12 22:17:00,646 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 92/959   Jobs: 1   Epoch: 1.92/20.0 (9.6% complete)   lr: 0.000802   
2022-05-12 22:17:21,306 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 93/959   Jobs: 1   Epoch: 1.94/20.0 (9.7% complete)   lr: 0.000800   
2022-05-12 22:17:42,093 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 94/959   Jobs: 1   Epoch: 1.96/20.0 (9.8% complete)   lr: 0.000798   
2022-05-12 22:18:02,595 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 95/959   Jobs: 1   Epoch: 1.98/20.0 (9.9% complete)   lr: 0.000796   
2022-05-12 22:18:23,277 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 96/959   Jobs: 1   Epoch: 2.00/20.0 (10.0% complete)   lr: 0.000794   
2022-05-12 22:18:43,815 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 97/959   Jobs: 1   Epoch: 2.02/20.0 (10.1% complete)   lr: 0.000792   
2022-05-12 22:19:04,340 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 98/959   Jobs: 1   Epoch: 2.04/20.0 (10.2% complete)   lr: 0.000791   
2022-05-12 22:19:25,054 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 99/959   Jobs: 1   Epoch: 2.06/20.0 (10.3% complete)   lr: 0.000789   
2022-05-12 22:19:45,718 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 100/959   Jobs: 1   Epoch: 2.08/20.0 (10.4% complete)   lr: 0.000787   
2022-05-12 22:20:12,210 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 101/959   Jobs: 1   Epoch: 2.10/20.0 (10.5% complete)   lr: 0.000785   
2022-05-12 22:20:33,074 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 102/959   Jobs: 1   Epoch: 2.12/20.0 (10.6% complete)   lr: 0.000783   
2022-05-12 22:20:53,819 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 103/959   Jobs: 1   Epoch: 2.15/20.0 (10.7% complete)   lr: 0.000781   
2022-05-12 22:21:14,806 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 104/959   Jobs: 1   Epoch: 2.17/20.0 (10.8% complete)   lr: 0.000779   
2022-05-12 22:21:35,385 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 105/959   Jobs: 1   Epoch: 2.19/20.0 (10.9% complete)   lr: 0.000777   
2022-05-12 22:21:55,918 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 106/959   Jobs: 1   Epoch: 2.21/20.0 (11.0% complete)   lr: 0.000776   
2022-05-12 22:22:16,675 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 107/959   Jobs: 1   Epoch: 2.23/20.0 (11.1% complete)   lr: 0.000774   
2022-05-12 22:22:37,244 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 108/959   Jobs: 1   Epoch: 2.25/20.0 (11.2% complete)   lr: 0.000772   
2022-05-12 22:22:57,666 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 109/959   Jobs: 1   Epoch: 2.27/20.0 (11.4% complete)   lr: 0.000770   
2022-05-12 22:23:18,464 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 110/959   Jobs: 1   Epoch: 2.29/20.0 (11.5% complete)   lr: 0.000768   
2022-05-12 22:23:39,229 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 111/959   Jobs: 1   Epoch: 2.31/20.0 (11.6% complete)   lr: 0.000766   
2022-05-12 22:23:59,816 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 112/959   Jobs: 1   Epoch: 2.33/20.0 (11.7% complete)   lr: 0.000764   
2022-05-12 22:24:20,175 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 113/959   Jobs: 1   Epoch: 2.35/20.0 (11.8% complete)   lr: 0.000763   
2022-05-12 22:24:40,799 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 114/959   Jobs: 1   Epoch: 2.38/20.0 (11.9% complete)   lr: 0.000761   
2022-05-12 22:25:01,584 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 115/959   Jobs: 1   Epoch: 2.40/20.0 (12.0% complete)   lr: 0.000759   
2022-05-12 22:25:22,011 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 116/959   Jobs: 1   Epoch: 2.42/20.0 (12.1% complete)   lr: 0.000757   
2022-05-12 22:25:42,774 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 117/959   Jobs: 1   Epoch: 2.44/20.0 (12.2% complete)   lr: 0.000755   
[... iterations 118-937 omitted here; repeated lines were trimmed because of GitHub's comment length limit ...]
2022-05-13 03:16:25,873 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 938/959   Jobs: 1   Epoch: 19.54/20.0 (97.7% complete)   lr: 0.000105   
2022-05-13 03:16:46,805 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 939/959   Jobs: 1   Epoch: 19.56/20.0 (97.8% complete)   lr: 0.000105   
2022-05-13 03:17:07,547 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 940/959   Jobs: 1   Epoch: 19.58/20.0 (97.9% complete)   lr: 0.000105   
2022-05-13 03:17:34,074 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 941/959   Jobs: 1   Epoch: 19.60/20.0 (98.0% complete)   lr: 0.000105   
2022-05-13 03:17:55,188 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 942/959   Jobs: 1   Epoch: 19.62/20.0 (98.1% complete)   lr: 0.000104   
2022-05-13 03:18:16,078 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 943/959   Jobs: 1   Epoch: 19.65/20.0 (98.2% complete)   lr: 0.000104   
2022-05-13 03:18:36,844 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 944/959   Jobs: 1   Epoch: 19.67/20.0 (98.3% complete)   lr: 0.000104   
2022-05-13 03:18:57,689 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 945/959   Jobs: 1   Epoch: 19.69/20.0 (98.4% complete)   lr: 0.000104   
2022-05-13 03:19:18,545 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 946/959   Jobs: 1   Epoch: 19.71/20.0 (98.5% complete)   lr: 0.000103   
2022-05-13 03:19:39,511 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 947/959   Jobs: 1   Epoch: 19.73/20.0 (98.6% complete)   lr: 0.000103   
2022-05-13 03:20:00,370 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 948/959   Jobs: 1   Epoch: 19.75/20.0 (98.8% complete)   lr: 0.000103   
2022-05-13 03:20:21,540 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 949/959   Jobs: 1   Epoch: 19.77/20.0 (98.9% complete)   lr: 0.000103   
2022-05-13 03:20:42,841 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 950/959   Jobs: 1   Epoch: 19.79/20.0 (99.0% complete)   lr: 0.000102   
2022-05-13 03:21:03,773 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 951/959   Jobs: 1   Epoch: 19.81/20.0 (99.1% complete)   lr: 0.000102   
2022-05-13 03:21:25,128 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 952/959   Jobs: 1   Epoch: 19.83/20.0 (99.2% complete)   lr: 0.000102   
2022-05-13 03:21:46,070 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 953/959   Jobs: 1   Epoch: 19.85/20.0 (99.3% complete)   lr: 0.000102   
2022-05-13 03:22:06,945 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 954/959   Jobs: 1   Epoch: 19.88/20.0 (99.4% complete)   lr: 0.000101   
2022-05-13 03:22:27,827 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 955/959   Jobs: 1   Epoch: 19.90/20.0 (99.5% complete)   lr: 0.000101   
2022-05-13 03:22:48,833 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 956/959   Jobs: 1   Epoch: 19.92/20.0 (99.6% complete)   lr: 0.000101   
2022-05-13 03:23:09,761 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 957/959   Jobs: 1   Epoch: 19.94/20.0 (99.7% complete)   lr: 0.000101   
2022-05-13 03:23:30,733 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 958/959   Jobs: 1   Epoch: 19.96/20.0 (99.8% complete)   lr: 0.000100   
2022-05-13 03:23:51,775 [steps/nnet3/chain/train.py:529 - train - INFO ] Iter: 959/959   Jobs: 1   Epoch: 19.98/20.0 (99.9% complete)   lr: 0.000100   
2022-05-13 03:24:12,736 [steps/nnet3/chain/train.py:592 - train - INFO ] Doing final combination to produce final.mdl
2022-05-13 03:24:12,736 [/home/user/Documents/ML_Projects/vosk-api/kaldi/egs/wsj/training/steps/libs/nnet3/train/chain_objf/acoustic_model.py:571 - combine_models - INFO ] Combining {936, 937, 938, 939, 940, 941, 942, 943, 944, 945, 946, 947, 948, 949, 950, 951, 952, 953, 954, 955, 956, 957, 958, 959, 960} models.
2022-05-13 03:24:36,944 [steps/nnet3/chain/train.py:620 - train - INFO ] Cleaning up the experiment directory exp/chain/tdnn
steps/nnet2/remove_egs.sh: Finished deleting examples in exp/chain/tdnn/egs
exp/chain/tdnn: num-iters=960 nj=1..1 num-params=3.5M dim=40+40->2176 combine=-0.127->-0.119 (over 13) xent:train/valid[638,959]=(-2.30,-2.09/-2.34,-2.15) logprob:train/valid[638,959]=(-0.147,-0.126/-0.172,-0.152)
Converting 'data/local/lm/lm_tgsmall.arpa.gz' to FST
arpa2fst --disambig-symbol=#0 --read-symbol-table=data/lang_test/words.txt - data/lang_test/G.fst 
LOG (arpa2fst[5.5.1012~1-dd107]:Read():arpa-file-parser.cc:94) Reading \data\ section.
LOG (arpa2fst[5.5.1012~1-dd107]:Read():arpa-file-parser.cc:149) Reading \1-grams: section.
LOG (arpa2fst[5.5.1012~1-dd107]:Read():arpa-file-parser.cc:149) Reading \2-grams: section.
LOG (arpa2fst[5.5.1012~1-dd107]:Read():arpa-file-parser.cc:149) Reading \3-grams: section.
LOG (arpa2fst[5.5.1012~1-dd107]:RemoveRedundantStates():arpa-lm-compiler.cc:359) Reduced num-states from 1183418 to 249533
fstisstochastic data/lang_test/G.fst 
9.90926e-06 -0.29805
Succeeded in formatting LM: 'data/local/lm/lm_tgsmall.arpa.gz'
tree-info exp/chain/tdnn/tree 
tree-info exp/chain/tdnn/tree 
fstpushspecial 
fstminimizeencoded 
fstdeterminizestar --use-log=true 
fsttablecompose data/lang_test/L_disambig.fst data/lang_test/G.fst 
fstisstochastic data/lang_test/tmp/LG.fst 
-0.0577829 -0.0586653
[info]: LG not stochastic.
fstcomposecontext --context-size=2 --central-position=1 --read-disambig-syms=data/lang_test/phones/disambig.int --write-disambig-syms=data/lang_test/tmp/disambig_ilabels_2_1.int data/lang_test/tmp/ilabels_2_1.1396775 data/lang_test/tmp/LG.fst 
fstisstochastic data/lang_test/tmp/CLG_2_1.fst 
-0.0577829 -0.0586653
[info]: CLG not stochastic.
make-h-transducer --disambig-syms-out=exp/chain/tdnn/graph/disambig_tid.int --transition-scale=1.0 data/lang_test/tmp/ilabels_2_1 exp/chain/tdnn/tree exp/chain/tdnn/final.mdl 
fsttablecompose exp/chain/tdnn/graph/Ha.fst data/lang_test/tmp/CLG_2_1.fst 
fstminimizeencoded 
fstdeterminizestar --use-log=true 
fstrmsymbols exp/chain/tdnn/graph/disambig_tid.int 
fstrmepslocal 
fstisstochastic exp/chain/tdnn/graph/HCLGa.fst 
-0.0191171 -0.213417
HCLGa is not stochastic
add-self-loops --self-loop-scale=1.0 --reorder=true exp/chain/tdnn/final.mdl exp/chain/tdnn/graph/HCLGa.fst 
fstisstochastic exp/chain/tdnn/graph/HCLG.fst 
1.90465e-09 -0.146711
[info]: final HCLG is not stochastic.
arpa-to-const-arpa --bos-symbol=200004 --eos-symbol=200005 --unk-symbol=2 'gunzip -c data/local/lm/lm_tgmed.arpa.gz | utils/map_arpa_lm.pl data/lang_test_rescore/words.txt|' data/lang_test_rescore/G.carpa 
LOG (arpa-to-const-arpa[5.5.1012~1-dd107]:BuildConstArpaLm():const-arpa-lm.cc:1078) Reading gunzip -c data/local/lm/lm_tgmed.arpa.gz | utils/map_arpa_lm.pl data/lang_test_rescore/words.txt|
utils/map_arpa_lm.pl: Processing "\data\"
utils/map_arpa_lm.pl: Processing "\1-grams:\"
LOG (arpa-to-const-arpa[5.5.1012~1-dd107]:Read():arpa-file-parser.cc:94) Reading \data\ section.
LOG (arpa-to-const-arpa[5.5.1012~1-dd107]:Read():arpa-file-parser.cc:149) Reading \1-grams: section.
utils/map_arpa_lm.pl: Processing "\2-grams:\"
LOG (arpa-to-const-arpa[5.5.1012~1-dd107]:Read():arpa-file-parser.cc:149) Reading \2-grams: section.
utils/map_arpa_lm.pl: Processing "\3-grams:\"
LOG (arpa-to-const-arpa[5.5.1012~1-dd107]:Read():arpa-file-parser.cc:149) Reading \3-grams: section.
steps/make_mfcc.sh --cmd run.pl --nj 10 data/test exp/make_mfcc/test
steps/make_mfcc.sh: moving data/test/feats.scp to data/test/.backup
utils/validate_data_dir.sh: Successfully validated data-directory data/test
steps/make_mfcc.sh: [info]: no segments file exists: assuming wav.scp indexed by utterance.
steps/make_mfcc.sh: Succeeded creating MFCC features for test
steps/compute_cmvn_stats.sh data/test exp/make_mfcc/test
Succeeded creating CMVN stats for test
steps/online/nnet2/extract_ivectors_online.sh --nj 10 data/test exp/chain/extractor exp/chain/ivectors_test
steps/online/nnet2/extract_ivectors_online.sh: extracting iVectors
steps/online/nnet2/extract_ivectors_online.sh: combining iVectors across jobs
steps/online/nnet2/extract_ivectors_online.sh: done extracting (online) iVectors to exp/chain/ivectors_test using the extractor in exp/chain/extractor.
steps/nnet3/decode.sh --cmd run.pl --num-threads 10 --nj 1 --beam 13.0 --max-active 7000 --lattice-beam 4.0 --online-ivector-dir exp/chain/ivectors_test --acwt 1.0 --post-decode-acwt 10.0 exp/chain/tdnn/graph data/test exp/chain/tdnn/decode_test
steps/nnet3/decode.sh: feature type is raw
steps/diagnostic/analyze_lats.sh --cmd run.pl --iter final exp/chain/tdnn/graph exp/chain/tdnn/decode_test
analyze_phone_length_stats.py: WARNING: optional-silence SIL is seen only 19.5202944648% of the time at utterance begin.  This may not be optimal.
analyze_phone_length_stats.py: WARNING: optional-silence SIL is seen only 75.1856225109% of the time at utterance end.  This may not be optimal.
steps/diagnostic/analyze_lats.sh: see stats in exp/chain/tdnn/decode_test/log/analyze_alignments.log
Overall, lattice depth (10,50,90-percentile)=(1,5,27) and mean=11.0
steps/diagnostic/analyze_lats.sh: see stats in exp/chain/tdnn/decode_test/log/analyze_lattice_depth_stats.log
score best paths
local/score.sh --cmd run.pl data/test exp/chain/tdnn/graph exp/chain/tdnn/decode_test
local/score.sh: scoring with word insertion penalty=0.0
score confidence and timing with sclite
Decoding done.
steps/lmrescore_const_arpa.sh data/lang_test data/lang_test_rescore data/test exp/chain/tdnn/decode_test exp/chain/tdnn/decode_test_rescore
local/score.sh --cmd run.pl data/test data/lang_test_rescore exp/chain/tdnn/decode_test_rescore
local/score.sh: scoring with word insertion penalty=0.0
grep: exp/tri3b/decode_tgsmall_test/wer_*: No such file or directory
%WER 36.23 [ 313658 / 865805, 57762 ins, 92680 del, 163216 sub ] exp/chain/tdnn/decode_test/wer_7_0.0
%WER 35.78 [ 309776 / 865805, 57306 ins, 92931 del, 159539 sub ] exp/chain/tdnn/decode_test_rescore/wer_7_0.0
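With training, graph compilation, and decoding finished, the next step is packing the outputs into a Vosk model directory. The sketch below is a best-guess assembly script, not an official procedure: the target layout (am/, conf/, graph/, ivector/, rescore/) is an assumption based on the structure of the published Vosk models, and the source paths assume this recipe's exp/chain/tdnn and exp/chain/extractor output. Verify each file exists in your own run before copying.

```shell
#!/bin/sh
# Sketch: assemble a Vosk-style model directory from this recipe's output.
# The directory layout is assumed from published Vosk models; adjust paths
# to match your setup.
set -u

EXP=exp/chain/tdnn           # trained chain model (final.mdl, tree, graph/)
IVEC=exp/chain/extractor     # online i-vector extractor
LMDIR=data/lang_test_rescore # rescoring LM (G.carpa)
OUT=model

mkdir -p "$OUT/am" "$OUT/conf" "$OUT/graph" "$OUT/ivector" "$OUT/rescore"

# Copy a file or directory if it exists, otherwise warn and continue.
copy() {
  if [ -e "$1" ]; then
    cp -r "$1" "$2"
  else
    echo "WARNING: missing $1" >&2
  fi
}

copy "$EXP/final.mdl"          "$OUT/am/"
copy "$EXP/tree"               "$OUT/am/"
copy "$EXP/graph/HCLG.fst"     "$OUT/graph/"
copy "$EXP/graph/words.txt"    "$OUT/graph/"
copy "$EXP/graph/phones"       "$OUT/graph/"
copy "$IVEC/final.ie"          "$OUT/ivector/"
copy "$IVEC/final.mat"         "$OUT/ivector/"
copy "$IVEC/final.dubm"        "$OUT/ivector/"
copy "$IVEC/global_cmvn.stats" "$OUT/ivector/"
copy "$IVEC/online_cmvn.conf"  "$OUT/ivector/"
copy "$IVEC/splice_opts"       "$OUT/ivector/"
copy conf/mfcc.conf            "$OUT/conf/"
copy "$LMDIR/G.carpa"          "$OUT/rescore/"
```

You would also still need a conf/model.conf with runtime decoding options (beam, lattice-beam, acoustic scale); copying one from an existing small Vosk model and adjusting it appears to be the usual shortcut.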

mukeshbadgujar commented on May 13, 2022

> Please tell me which files i need to create model, please tell me steps.

Submit a patch adding the model based on your initial understanding, and we will correct it.

nshmyrev commented on May 13, 2022

> > Please tell me which files i need to create model, please tell me steps.
>
> Submit a patch to put the model with your initial understanding, we will correct.

Sorry sir, but I can't understand what you said.

mukeshbadgujar commented on May 18, 2022
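For the documentation being drafted here, one useful sanity check on the log above: the bracketed numbers in the %WER lines are internally consistent, since WER = (insertions + deletions + substitutions) / reference words. A quick check in Python, using the figures from the decode_test line:

```python
# Figures copied from the %WER line for exp/chain/tdnn/decode_test above:
# %WER 36.23 [ 313658 / 865805, 57762 ins, 92680 del, 163216 sub ]
ins, dels, subs = 57762, 92680, 163216
ref_words = 865805

errors = ins + dels + subs
assert errors == 313658  # matches the bracketed error count

wer = 100.0 * errors / ref_words
print(f"{wer:.2f}")  # prints 36.23
```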