> Hi @tlikhomanenko,
>
> Thank you very much for your help.
>
> This is the result of my decoder step:
> `[Decode lists/test-clean.lst (2620 samples) in 126.635s...`
@tlikhomanenko It is mentioned in the SOTA documentation for the transformer: the model is trained with a total batch size of 128 for approximately 320 epochs with Adadelta. There is a warmup stage: SpecAugment...
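For context, here is a minimal sketch of what such a schedule could look like. The step counts are made up, and the idea that SpecAugment is switched on only after the warmup is my reading of the truncated sentence above, not a confirmed detail of the recipe:

```python
# Hedged sketch: linear learning-rate warmup plus SpecAugment gated on the
# warmup stage. base_lr, warmup_steps and specaug_start_step are illustrative
# assumptions, not the recipe's actual values.
def lr_at(step, base_lr=1.0, warmup_steps=64000):
    # Ramp linearly from 0 to base_lr, then stay constant.
    return base_lr * min(1.0, step / warmup_steps)

def specaugment_enabled(step, specaug_start_step=64000):
    # Apply SpecAugment only once the warmup stage is over.
    return step >= specaug_start_step
```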
@qzfnihao Did you generate this lexicon file librispeech-train+dev-unigram-10000-nbest10.lexicon using the code provided in the repo, or did you just download it from the repo? I have used the same file and instead of...
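For reference, a minimal sketch (not the repo's actual script) of how a unigram-10000, nbest-10 word-piece lexicon can be built with sentencepiece. The file names, paths and preprocessing below are assumptions for illustration:

```python
# Hedged sketch of generating a word-piece lexicon with n-best segmentations.
import sentencepiece as spm

# 1) Train a unigram word-piece model on the training transcripts
#    (one sentence per line; path is hypothetical).
spm.SentencePieceTrainer.train(
    input="train+dev_transcripts.txt",
    model_prefix="librispeech-unigram-10000",
    vocab_size=10000,
    model_type="unigram",
)

# 2) For every word, write up to 10 best segmentations, one lexicon line each:
#    "<word>\t<piece> <piece> ...".
sp = spm.SentencePieceProcessor()
sp.Load("librispeech-unigram-10000.model")

with open("words.txt") as fin, open("lexicon.txt", "w") as fout:
    for word in (line.strip() for line in fin):
        for pieces in sp.NBestEncodeAsPieces(word, 10):
            # The released lexicons appear to use "_" where sentencepiece
            # prints its "▁" meta symbol.
            tokens = " ".join(p.replace("\u2581", "_") for p in pieces)
            fout.write(f"{word}\t{tokens}\n")
```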
> What do you mean here? Lexicon should be a word-piece sequence because the token set of the AM is word pieces, not letters.

I have mentioned the issue here: #757
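Schematically, the difference is only in the token column of each lexicon line. A letter lexicon entry would look like `hello	h e l l o |`, whereas a word-piece lexicon carries word-piece segmentations (the words and splits below are made-up examples):

```
hello	_hello
hello	_he llo
```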
@tlikhomanenko Most people don't have 16 GPUs to try out these models with the exact configurations provided in sota/2019. I am also trying to reproduce the transformer+CTC results on...
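To make the scaling concrete, here is the rough batch-size arithmetic, assuming the training setup allows gradient accumulation (or an equivalent trade of GPUs for steps). Only the total batch size of 128 comes from the documentation; the per-GPU batch size is a made-up number:

```python
# Hedged sketch of keeping the effective batch size at 128 with fewer GPUs.
total_batch = 128          # from the SOTA documentation
per_gpu_batch = 8          # hypothetical, limited by GPU memory
for n_gpus in (16, 8, 2):
    accum_steps = total_batch // (n_gpus * per_gpu_batch)
    effective = n_gpus * per_gpu_batch * accum_steps
    print(f"{n_gpus:>2} GPUs x {per_gpu_batch} per GPU "
          f"x {accum_steps} accumulation step(s) = effective batch {effective}")
```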
Sorry!! There were 2 utils files; I chose the wrong one. When are you removing the MATLAB dependency for MRCG extraction and the other TODOs?
Okay!! :+1: Does the py branch support spectrogram features (I didn't find them in the training script) instead of MRCG?
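In case it helps, a minimal sketch of computing a log-mel spectrogram with librosa as a possible stand-in for MRCG features. This is only an assumption about what a spectrogram front end could look like, not what the py branch actually implements; the file name and frame settings are illustrative:

```python
# Hedged sketch: log-mel spectrogram front end (25 ms window, 10 ms hop at 16 kHz).
import librosa
import numpy as np

y, sr = librosa.load("example.wav", sr=16000)            # hypothetical file
mel = librosa.feature.melspectrogram(
    y=y, sr=sr, n_fft=400, hop_length=160, n_mels=80
)
log_mel = librosa.power_to_db(mel, ref=np.max)            # shape: (n_mels, n_frames)
print(log_mel.shape)
```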
Thanks!! Eagerly waiting for that.
Waiting even more eagerly now, since I found out MATLAB is paid. :):)
Any updates or plans to include Squeezeformer in ESPNet?