snowfall
Moved to https://github.com/k2-fsa/icefall
Guys, I ran the current setup on the full librispeech data for 3 epochs; this issue is mostly just an FYI so you can see what I got. I...
Fixes #132. 2021-04-23: use AM model trained with [full librispeech data](https://github.com/k2-fsa/snowfall/issues/146), rescored with an LM.

| epoch | num_paths | token ppl | word ppl | test-clean | test-other |
| -- | --...
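Since token-level and word-level perplexity both exponentiate the same total corpus log-probability, just normalized by different counts, one can be converted to the other from the token and word counts. A minimal sketch of that conversion, assuming the two ppl columns above are computed this way; the counts in the example are made-up placeholders, not from this experiment:

```python
import math

def word_ppl_from_token_ppl(token_ppl: float,
                            num_tokens: int,
                            num_words: int) -> float:
    """Convert token-level perplexity to word-level perplexity.

    Both exponentiate the same total log-probability, normalized by
    different counts, so word_ppl = token_ppl ** (num_tokens / num_words).
    """
    total_logprob = -num_tokens * math.log(token_ppl)
    return math.exp(-total_logprob / num_words)

# Hypothetical counts, purely for illustration.
print(word_ppl_from_token_ppl(token_ppl=50.0,
                              num_tokens=120_000, num_words=80_000))
```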
Got this via email from @zhu-han ...

```
I got the first reasonable result:
2021-04-21 02:28:00,831 INFO [common.py:365] [test-clean] %WER 13.39% [7041 / 52576, 887 ins, 965 del, 5189 sub...
```
I have come up with a plan for how to deal with phonetic/graphemic context. For now I'll use the label "phonetic", but this is without loss of generality. There's no...
After having a look at nsys output, I think we are largely limited by latency of sequential operations in IntersectDevice, IntersectDense, GetForwardScores and GetBackwardScores (and of memory transfer when we...
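The profiling above is k2/snowfall-specific, but one generic way to hide some memory-transfer latency in PyTorch is to pin host memory and issue the copy on a side stream with `non_blocking=True`, so it overlaps with compute on the default stream. A minimal sketch of that pattern; the shapes and the double-buffering loop are illustrative, not taken from the actual training code:

```python
import torch

device = torch.device("cuda")
copy_stream = torch.cuda.Stream()

# Pinned (page-locked) host memory is required for truly async copies.
host_batches = [torch.randn(512, 80).pin_memory() for _ in range(8)]
weight = torch.randn(80, 80, device=device)

prev = None
for host_batch in host_batches:
    # Launch the next host-to-device copy on a side stream ...
    with torch.cuda.stream(copy_stream):
        staged = host_batch.to(device, non_blocking=True)
    # ... and compute on the previous batch while the copy is in flight.
    if prev is not None:
        _ = prev @ weight
    # The default stream must wait for the copy before touching `staged`.
    torch.cuda.current_stream().wait_stream(copy_stream)
    prev = staged
if prev is not None:
    _ = prev @ weight
torch.cuda.synchronize()
```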
When I try to run more than one training (with a single job) on the same machine, I get this:

```
Traceback (most recent call last):
  File "mmi_bigram_train.py", line 475,...
```
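The traceback is cut off here, but a common cause when two single-machine jobs clash is both binding the default torch.distributed rendezvous port. Assuming (and this is only an assumption) that this is an address-in-use error, here is a sketch that gives each job its own port; `find_free_port` is a hypothetical helper:

```python
import os
import socket

import torch.distributed as dist

def find_free_port() -> int:
    """Ask the OS for an unused TCP port (hypothetical helper)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("", 0))
        return s.getsockname()[1]

# Give each training job its own rendezvous port so two jobs on the
# same machine do not collide on the default one.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", str(find_free_port()))

dist.init_process_group(backend="gloo", rank=0, world_size=1)
```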
Guys, I just remembered a trick that we used to use in Kaldi to help models converge early on, and I tried it on a setup that was not converging...
Guys, I realized that there is some very low-hanging fruit that could easily make our WERs state of the art, which is neural LM rescoring. An advantage of our...
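The concrete integration proposed in the rest of this issue is cut off above, but as a generic illustration of n-best neural LM rescoring: score every hypothesis with an autoregressive LM and pick the one maximizing the interpolated acoustic + LM score. The `TinyLM` class, the token ids, the acoustic scores, and the 0.5 scale below are all made-up placeholders:

```python
from typing import List, Tuple

import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """Stand-in autoregressive LM; in practice this would be a trained
    transformer or LSTM language model."""

    def __init__(self, vocab_size: int, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.proj = nn.Linear(dim, vocab_size)

    @torch.no_grad()
    def log_prob(self, tokens: torch.Tensor) -> float:
        """Total log P(tokens), scored left to right."""
        x = self.embed(tokens[:-1].unsqueeze(0))
        h, _ = self.rnn(x)
        logps = torch.log_softmax(self.proj(h).squeeze(0), dim=-1)
        return logps.gather(1, tokens[1:].unsqueeze(1)).sum().item()

def rescore(nbest: List[Tuple[torch.Tensor, float]],
            lm: TinyLM, lm_scale: float = 0.5) -> int:
    """Index of the hypothesis maximizing am_score + lm_scale * lm_logprob."""
    totals = [am + lm_scale * lm.log_prob(toks) for toks, am in nbest]
    return max(range(len(totals)), key=totals.__getitem__)

lm = TinyLM(vocab_size=100)
# Each entry: (token ids incl. BOS/EOS, acoustic log-score). Made up.
nbest = [(torch.tensor([1, 5, 7, 2]), -12.3),
         (torch.tensor([1, 5, 9, 2]), -12.9)]
print("best hypothesis index:", rescore(nbest, lm))
```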
See below (using the latest master)

```
2021-03-29 07:34:23,835 INFO [common.py:270] ================================================================================
2021-03-29 07:34:23,837 INFO [ctc_att_transformer_train.py:440] epoch 0, learning rate 0
Traceback (most recent call last):
  File "./ctc_att_transformer_train.py", line 508,...
```
I have been doing most of the acoustic model tuning on the librispeech setup, but the WERs don't seem to go below about 6.8% no matter what I do. I had a...