Tatiana Likhomanenko

Results: 242 comments by Tatiana Likhomanenko

Hey, sorry for the delay! Here are my comments: > Please let me know if there are any steps I'm missing for preparing the LM, or how to improve the LM preparation to prepare...

Great to hear about your progress! > I got the same issues (either first name or last name missing) with beamsize = 1000 + beamsizetoken=60 as well. Is there any way...

About decoding: you can see that it produces a similar-sounding word in the letter prediction, and because you are using lexicon-based decoding and you don't have "bas" and "antonet"...
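If the decoding is lexicon-based, a word can only be hypothesized if it appears in the lexicon file, so missing names like "bas" and "antonet" have to be added. A minimal sketch of building lexicon entries, assuming a character token set where `|` marks the word boundary (the format here, word + tab + space-separated tokens, follows the wav2letter-style convention; adjust to your own token set):

```python
def lexicon_line(word, word_sep="|"):
    """Build one lexicon entry: the word, a tab, then its letter tokens.

    Assumes character tokens with `|` as the word-boundary symbol;
    both are assumptions -- match your tokens file.
    """
    tokens = list(word) + [word_sep]
    return word + "\t" + " ".join(tokens)

# Append the missing words to the lexicon used at decode time:
for w in ["bas", "antonet"]:
    print(lexicon_line(w))
# bas	b a s |
# antonet	a n t o n e t |
```

After extending the lexicon, the decoder can propose these words instead of substituting similar-sounding in-lexicon ones.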

`--uselexicon=false` will use lexicon-free decoding, so you need to have a token-level LM. For your current use case, please first try `make the if condition always false to I...
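A token-level LM is trained on text where each token (letter) is a separate "word". A minimal sketch of converting a word-level LM corpus into a token-level one, assuming character tokens with `|` as the word-boundary symbol (both are assumptions; match your token set before training, e.g. with KenLM):

```python
def to_token_corpus(line, word_sep="|"):
    """Convert one line of word-level LM text into token-level text:
    letters separated by spaces, with `|` inserted at word boundaries.

    The boundary symbol and character tokenization are assumptions --
    they must match the tokens the acoustic model emits.
    """
    tokens = []
    for word in line.split():
        tokens.extend(list(word))
        tokens.append(word_sep)
    return " ".join(tokens)

print(to_token_corpus("hello world"))
# h e l l o | w o r l d |
```

An n-gram LM trained on such text scores token sequences directly, which is what lexicon-free decoding needs.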

I had a quick look at your code; it looks fine to me. @xuqiantong, any comment on the decoder? What model are you using? If a transformer, then I don't really expect that...

@AlexandderGorodetski Do you have the word "bold360" in your lexicon file? If yes, what sequence of tokens did you set for it? In practice it is better to preprocess numbers...
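One common way to preprocess such words is to spell out the digits, so that every resulting word is coverable by a letter-based lexicon. A minimal sketch (the digit-by-digit reading is an assumption; a full number normalizer would also handle "360" as "three hundred sixty"):

```python
import re

DIGIT_WORDS = {
    "0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
    "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine",
}

def spell_out_digits(word):
    """Replace each digit with its spoken name, splitting it off as
    its own word, e.g. 'bold360' -> 'bold three six zero'."""
    # Capturing group keeps the digits in the split result.
    parts = re.split(r"(\d)", word)
    out = [DIGIT_WORDS.get(p, p) for p in parts if p]
    return " ".join(out)

print(spell_out_digits("bold360"))
# bold three six zero
```

Each output word ("bold", "three", ...) then gets an ordinary letter-token lexicon entry.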

There are 3 cases:
- word-based decoding (with lexicon only)
- tkn-based decoding (with lexicon)
- tkn-based decoding (without lexicon)

All three cases are different and will...
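As a rough sketch of how the three cases map onto decoder settings (a config fragment under assumptions: flag names follow the wav2letter-style `Decode` binary, and the paths are hypothetical; check the flags of your version):

```shell
# 1) word-based decoding: lexicon + word-level LM
#    --uselexicon=true  --decodertype=wrd --lexicon=lex.txt --lm=word_lm.bin
# 2) tkn-based decoding with lexicon: lexicon constrains spellings,
#    the LM scores token sequences
#    --uselexicon=true  --decodertype=tkn --lexicon=lex.txt --lm=token_lm.bin
# 3) tkn-based decoding without lexicon: lexicon-free, token-level LM required
#    --uselexicon=false --decodertype=tkn --lm=token_lm.bin
```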

Hi, this is about the memory manager; it is just info for debugging, so you can skip it if you are trying to understand what is happening with training.
```
Memory Manager Stats
MemoryManager type:...
```

> Also I observe the model .bin gets updated for every epoch. Is there a way I can save the model .bin file and other files for every epoch so during...

What is your Viterbi WER on dev-clean in the training log? @vineelpratap, any idea on the disagreement with the tutorial numbers?