Liyong.Guo
Hi, I wonder whether it is appropriate to add the CE regularizer (_**grad_xent**_) directly to _**grad**_ in chain model training, as implemented in: [grad.add_mat(chain_opts.xent_regularize, grad_xent)](https://github.com/jzlianglu/pykaldi2/blob/5e988e5968aa9a5867f8179e6c53ea715ac46bdc/ops/ops.py#L129). In kaldi's chain model recipe, e.g. [aishell...
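For reference, a minimal PyTorch sketch (not the pykaldi2 code itself) of what the linked line does, assuming `grad` and `grad_xent` are same-shaped tensors and `xent_regularize` is the scalar from `chain_opts`:

```python
import torch

def add_xent_regularizer(grad: torch.Tensor,
                         grad_xent: torch.Tensor,
                         xent_regularize: float) -> torch.Tensor:
    """In-place: grad += xent_regularize * grad_xent,
    mirroring the Kaldi-style grad.add_mat(scale, grad_xent) call."""
    grad.add_(grad_xent, alpha=xent_regularize)
    return grad

# e.g. grad = add_xent_regularizer(grad, grad_xent, chain_opts.xent_regularize)
```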
## Updated on Nov. 30, 2022:
k2.mwer has already been merged via https://github.com/k2-fsa/k2/pull/1103. This PR now focuses on tracking arc_map_token during decoding.

## Original:
An implementation of MWER, equation 2 of...
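Not the k2 implementation itself, but a minimal PyTorch sketch of the usual MWER objective (expected word error over an n-best list under renormalized hypothesis posteriors, with a mean-error baseline); the tensor names are illustrative assumptions:

```python
import torch

def mwer_loss(hyp_logprobs: torch.Tensor, word_errors: torch.Tensor) -> torch.Tensor:
    """Minimum word error rate loss over an n-best list.

    hyp_logprobs: (N,) total log-probability of each hypothesis.
    word_errors:  (N,) word errors of each hypothesis vs. the reference.
    """
    # Renormalize posteriors over the n-best list.
    posteriors = torch.softmax(hyp_logprobs, dim=0)
    # Subtract the mean error as a variance-reduction baseline.
    relative_errors = word_errors.float() - word_errors.float().mean()
    # Expected (relative) word error under the n-best posterior.
    return torch.sum(posteriors * relative_errors)
```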
The following results are obtained with a deliberately selected input and decoding configuration.

## With allow_partial=False (the original code):
There is no final arc (i.e. no arc with arc.label == -1).
```
0 1 0...
```
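For context, a toy k2 FSA in text format showing what a final arc (label -1) looks like; this is an illustrative example, not the lattice from the test above:

```python
import k2

# Arc lines are "src_state dest_state label score"; the last line is the final state.
# The arc entering the final state carries label -1 (the "final arc" mentioned above).
s = '''
0 1 2 0.1
1 2 -1 0.2
2
'''
fsa = k2.Fsa.from_str(s.strip())
print(fsa.labels)  # the second arc's label is -1
```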
The log-probs of the final frame do not contribute to lattice generation; this PR tries to fix that issue.

## With the original code:
if num_frames = 1
if num_frames = 10...
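A minimal sketch, assuming the usual k2 CTC decoding API, of how one might probe whether the final frame's log-probs affect the lattice; the vocabulary size, beams, and random scores are illustrative only:

```python
import torch
import k2

vocab_size = 3
num_frames = 1  # decode a single-frame segment, as in the comparison above
log_probs = torch.randn(1, num_frames, vocab_size + 1).log_softmax(dim=-1)
supervision_segments = torch.tensor([[0, 0, num_frames]], dtype=torch.int32)

dense_fsa_vec = k2.DenseFsaVec(log_probs, supervision_segments)
decoding_graph = k2.arc_sort(k2.ctc_topo(vocab_size))

lattice = k2.intersect_dense_pruned(
    decoding_graph,
    dense_fsa_vec,
    search_beam=20.0,
    output_beam=8.0,
    min_active_states=30,
    max_active_states=10000,
)
# If the final frame contributed, perturbing its log-probs should change this score.
print(lattice.get_tot_scores(log_semiring=True, use_double_scores=True))
```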
This PR adds blank-frame skipping based on fast_beam_search. It is similar to https://github.com/k2-fsa/icefall/pull/472, except that that one is based on greedy_search_batch. When prob_blank > gamma_blank, only the epsilon self-loop (the arc...
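The PR itself works on the decoding-graph arcs (keeping only the epsilon self-loop on confidently-blank frames); the sketch below shows only the simpler frame-filtering idea with an assumed threshold, not the actual fast_beam_search change:

```python
import torch

def keep_non_blank_frames(log_probs: torch.Tensor,
                          gamma_blank: float = 0.95,
                          blank_id: int = 0) -> torch.Tensor:
    """Drop frames whose blank probability exceeds gamma_blank.

    log_probs: (T, C) per-frame log-probabilities.
    Returns the log-probs of the kept (non-blank) frames only.
    """
    blank_prob = log_probs[:, blank_id].exp()
    keep_mask = blank_prob <= gamma_blank
    return log_probs[keep_mask]
```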
1. A better model trained with (ctc + label_smooth_loss, #219) is released; a loss sketch is given below.
2. 4-gram rescoring is integrated, referring to #215.

Latest result with feat_batch_norm | WER% on test_clean |...
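A minimal PyTorch sketch of combining CTC loss with label-smoothed cross-entropy; this is not the #219 implementation, and the interpolation weight, smoothing factor, and tensor shapes are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def ctc_plus_label_smooth_loss(ctc_log_probs, ctc_targets, input_lens, target_lens,
                               decoder_logits, decoder_targets,
                               ctc_weight: float = 0.5,
                               smoothing: float = 0.1,
                               blank_id: int = 0,
                               ignore_id: int = -1) -> torch.Tensor:
    """Interpolate CTC loss with label-smoothed cross-entropy.

    ctc_log_probs:  (T, N, C) log-probs fed to CTC.
    decoder_logits: (B, U, C) attention-decoder logits.
    """
    ctc = F.ctc_loss(ctc_log_probs, ctc_targets, input_lens, target_lens,
                     blank=blank_id, reduction="mean", zero_infinity=True)
    ce = F.cross_entropy(decoder_logits.reshape(-1, decoder_logits.size(-1)),
                         decoder_targets.reshape(-1),
                         ignore_index=ignore_id,
                         label_smoothing=smoothing)
    return ctc_weight * ctc + (1.0 - ctc_weight) * ce
```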
As mentioned in #217, BPE training with ctcLoss and labelSmoothLoss in snowfall currently obtains a higher WER than that of espnet.

decoding algorithm | training tool | encoder + k2 ctc decode + no rescore |...
WER results of this PR (with models loaded from the espnet model zoo):
```
test-clean 2.43%
test-other 5.79%
```
This PR implements the following procedure with models from the espnet model...
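A minimal sketch of loading a pretrained model from the espnet model zoo and running inference; the model tag and wav path are placeholders, and the PR's actual procedure (feeding the encoder output to k2 CTC decoding) is not shown here:

```python
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

# "some/espnet-librispeech-model" is a placeholder tag, not a real model name.
d = ModelDownloader()
speech2text = Speech2Text(**d.download_and_unpack("some/espnet-librispeech-model"))

speech, sample_rate = soundfile.read("test.wav")
nbests = speech2text(speech)
text, *_ = nbests[0]
print(text)
```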
Fixes #132

2021-04-23: use the AM model trained with [full librispeech data](https://github.com/k2-fsa/snowfall/issues/146).

rescore LM | epoch | num_paths | token ppl | word ppl | test-clean | test-other
-- | --...
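A minimal sketch of the n-best rescoring idea behind the num_paths column: take n paths from the first-pass lattice, score each word sequence with an external LM, and keep the best combined score. The function names and the interpolation weight are illustrative assumptions, not the snowfall implementation:

```python
from typing import Callable, List, Tuple

def rescore_nbest(nbest: List[Tuple[List[str], float]],
                  lm_score: Callable[[List[str]], float],
                  lm_weight: float = 0.5) -> List[str]:
    """Pick the hypothesis with the best AM + weighted LM score.

    nbest: list of (word_sequence, am_score) pairs, e.g. num_paths
           unique paths sampled from the first-pass lattice.
    lm_score: log-probability of a word sequence under the rescoring LM.
    """
    best_words, best_score = [], float("-inf")
    for words, am_score in nbest:
        total = am_score + lm_weight * lm_score(words)
        if total > best_score:
            best_words, best_score = words, total
    return best_words
```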
A faster version of https://github.com/k2-fsa/k2/pull/1089. It also fixes the issue of results differing between allow_partial=True and allow_partial=False with [yfyeung](https://github.com/yfyeung)'s blank-skipping model.

Results with [yfyeung](https://github.com/yfyeung)'s model epoch-30-avg-14:

fast_beam_search | allow_partial | test-clean | test-other...