Tatiana Likhomanenko
Ok, probably this 3-4% WER difference is just training variation, and the decoder parameters will not be exactly the same for your model. At least the improvement from 52% Viterbi to...
> Hi,
>
> This is about the memory manager, just info for debugging; you can skip this if you are trying to understand what is happening with the training.
>
> ...
A WER of 50% is expected with the very small model we have in the tutorial. Check `recipes/models/sota` for our best results. About converting epochs into iterations - just compute how many rows you...
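A minimal sketch of the epoch-to-iteration arithmetic being described; all numbers here are hypothetical, and the effective batch size is assumed to be the per-GPU batch size times the number of GPUs:

```python
# Hypothetical values - substitute your own.
num_rows = 281241    # rows (utterances) in your train list, e.g. `wc -l train.lst`
batch_size = 16      # per-GPU batch size times number of GPUs

iters_per_epoch = num_rows // batch_size   # updates per pass over the data
num_epochs = 100
total_iters = iters_per_epoch * num_epochs # value for iteration-based flags
print(iters_per_epoch, total_iters)
```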
@mironnn they have a different format; you need to convert the sota models, and right now we support this only for TDS models. Please follow the instructions for the TDS converter in wav2letter/tools.
cc @vineelpratap @avidov
@vineelpratap can you point to the correct link for the commit you mentioned, which probably fixes the problem?
As far as I know, the LM is word-based, so to properly use the lexicon-free decoder you at least need to use a word-piece LM (in this case you try to apply...
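For context, a hedged sketch of what using a word-piece LM means in practice: retokenize the LM training corpus into word pieces and train the LM on that text, so the LM vocabulary matches the acoustic model's token set. The SentencePiece model path and file names below are assumptions, not from this thread:

```python
import sentencepiece as spm

# Hypothetical paths - substitute your own word-piece model and corpus.
sp = spm.SentencePieceProcessor()
sp.load("word_pieces.model")

with open("lm_corpus.txt") as fin, open("lm_corpus.wp.txt", "w") as fout:
    for line in fin:
        pieces = sp.encode_as_pieces(line.strip())  # words -> word pieces
        fout.write(" ".join(pieces) + "\n")
# Then train the LM (e.g. with KenLM) on lm_corpus.wp.txt.
```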
Please see the example here: https://github.com/facebookresearch/wav2letter/tree/master/recipes/sota/2019/lm.
Could you try just putting, as the lexicon file, one that lists all tokens with the same spelling for each of them? For example, if your token set is {ab, cd, ef} the...
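A small sketch of that identity lexicon, assuming the wav2letter lexicon format of a word followed by a tab and its space-separated token spelling; the file name is hypothetical:

```python
# Build an identity lexicon where each token "spells" itself.
tokens = ["ab", "cd", "ef"]  # your token set, one entry per token

with open("identity.lexicon", "w") as f:
    for tok in tokens:
        f.write(f"{tok}\t{tok}\n")
# Resulting file:
# ab<TAB>ab
# cd<TAB>cd
# ef<TAB>ef
```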
It would be better to construct the wordMap here https://github.com/facebookresearch/wav2letter/blob/v0.2/inference/inference/decoder/Decoder.cpp#L61 using the token set. Or, if you provide the lexicon file as I proposed above, be sure to set the trie to none, and...