dsohum

5 comments by dsohum

Thanks for your time. I used the openly available LibriSpeech 3-gram LM and followed `make_fst` to construct the FST, but the LM scores are mostly -50 (the default value). I was not...
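A near-constant -50 usually suggests the lookups are falling back to a default floor score, e.g. because the decoder's labels don't match entries in the LM. A toy sanity check of that failure mode, with a hand-rolled ARPA unigram lookup (the ARPA snippet, parser, and -50 floor are illustrative, not from the repo):

```python
# Toy check: words missing from the LM fall back to the floor score
# (analogous to a -50 default in the decoder).
ARPA = r"""
\data\
ngram 1=3

\1-grams:
-0.30103 the
-0.69897 cat
-1.0 sat

\end\
"""

def unigram_logprob(arpa_text, word, floor=-50.0):
    """Return the log10 unigram probability of `word`, or `floor` if absent."""
    in_unigrams = False
    for line in arpa_text.splitlines():
        line = line.strip()
        if line == r"\1-grams:":
            in_unigrams = True
            continue
        if in_unigrams:
            # a blank line or the next \section\ header ends the unigram block
            if not line or line.startswith("\\"):
                break
            fields = line.split()  # [logprob, word, (optional backoff)]
            if fields[1] == word:
                return float(fields[0])
    return floor

assert unigram_logprob(ARPA, "cat") == -0.69897
assert unigram_logprob(ARPA, "dog") == -50.0  # OOV word hits the floor
```

If most queries behave like `"dog"` here, the vocabulary mapping between the acoustic model's labels and the FST is the first thing to check.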

I am confused about the LM integration. Doesn't it involve modifying the BeamSearchDecoder? The modified log-prob scores from the LMCellWrapper get normalized after the softmax in the BeamSearchDecoder....
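For context, shallow fusion typically adds weighted LM log-probabilities to the acoustic logits before the softmax; the softmax renormalizes the fused scores into a distribution but preserves their ranking, so the LM still steers the beam. A minimal numpy sketch (logits, LM scores, and the weight are made-up values):

```python
import numpy as np

def softmax(x):
    # numerically stable softmax
    e = np.exp(x - x.max())
    return e / e.sum()

# hypothetical acoustic logits and LM log-probs for a 4-token vocabulary
am_logits = np.array([2.0, 1.0, 0.5, -1.0])
lm_logprobs = np.array([-0.5, -2.0, -0.1, -3.0])
lm_weight = 0.5

combined = am_logits + lm_weight * lm_logprobs
fused = softmax(combined)

# renormalization yields a valid distribution without changing the ranking
assert np.isclose(fused.sum(), 1.0)
assert np.argmax(fused) == np.argmax(combined)
```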

I was trying to reproduce the experiment on the WSJ dataset, but I am only getting 40% WER using this repo _as-is_. Am I missing something? I am not using...

Thanks for replying! I ran the code for 16 epochs as specified by the repo (the validation loss seemed to saturate). Is any pretraining etc. required? _I did change...

Validation loss seems to converge to ~0.25 after 41 epochs, and I am getting 31% WER. Should I be running it for more epochs? Should I change the batch size for training? ![deepsphinx-42-epoch-as-is-2018](https://user-images.githubusercontent.com/20748628/39491084-adbe1954-4da8-11e8-9f2c-59b88e7adca7.png) Thanks!
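For anyone comparing WER numbers across runs: WER is the word-level Levenshtein distance between reference and hypothesis, divided by the reference length. A self-contained sketch (not the repo's scoring code):

```python
def wer(ref_words, hyp_words):
    """Word error rate: word-level edit distance / reference length."""
    # dynamic-programming table for Levenshtein distance over words
    d = [[0] * (len(hyp_words) + 1) for _ in range(len(ref_words) + 1)]
    for i in range(len(ref_words) + 1):
        d[i][0] = i  # deletions
    for j in range(len(hyp_words) + 1):
        d[0][j] = j  # insertions
    for i in range(1, len(ref_words) + 1):
        for j in range(1, len(hyp_words) + 1):
            sub = d[i - 1][j - 1] + (ref_words[i - 1] != hyp_words[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[-1][-1] / len(ref_words)

# one substitution out of three reference words
assert abs(wer("the cat sat".split(), "the cat sit".split()) - 1 / 3) < 1e-9
```

Since WER normalizes by reference length, it is comparable across epochs and batch sizes as long as the same test transcripts are used.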