wav2letter
Facebook AI Research's Automatic Speech Recognition Toolkit
Hello, I am trying to run Libri-1K transformer training. After 20 epochs, train-WER = 327 (please see the log below). Is something wrong with my run? I0530 04:41:50.446601 981 Train.cpp:573] Epoch 20 started!...
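For reference, WER is an edit-distance measure and is not capped at 100%: WER = 100 · (S + D + I) / N, where S, D, and I are the substitution, deletion, and insertion counts against the reference and N is the number of reference words. A model that is far from converged tends to emit many spurious tokens, so insertions alone can push train-WER well above 100; a value like 327 usually indicates the model has not learned yet rather than a problem with the metric itself.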
I am trying to reproduce streaming_convnets on LibriSpeech data on a 4-GPU machine. I find it hard to train on all the data together with Libri-Light, so I just use 1k hours...
When I run online inference, it seems that once part of a sentence has been printed to stdout, that part of the sentence never changes regardless of the following...
### Question Hi, could someone help me figure out why my training gets stuck when using multiple GPUs? #### Additional Context Stuck, no training iteration done (I am using 1 iteration...
Hi, I'm using the fork command on am_resnet_ctc_librispeech_dev_other.bin to adapt the model to my own dataset, and I got the following errors, which say `Loss has NaN values.` ``` I0723...
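A common cause of `Loss has NaN values.` with a CTC model is malformed training samples: empty or silent clips, or utterances too short to fit their transcript under CTC. A minimal screening sketch, assuming 16-bit PCM WAV clips; the 80 ms frame stride below is a placeholder and should be replaced with your acoustic model's actual overall downsampling:

```python
import wave
import numpy as np

def clip_is_usable(path: str) -> bool:
    """Flag empty or all-zero (silent) clips, a common source of
    degenerate CTC losses when fine-tuning on a new dataset."""
    with wave.open(path, "rb") as w:
        frames = w.readframes(w.getnframes())
    samples = np.frombuffer(frames, dtype=np.int16)  # assumes 16-bit PCM
    return samples.size > 0 and bool(np.any(samples))

def ctc_length_ok(path: str, n_target_tokens: int, stride_ms: float = 80.0) -> bool:
    """CTC needs at least as many output frames as target tokens.
    stride_ms is a hypothetical overall frame stride of the acoustic model."""
    with wave.open(path, "rb") as w:
        duration_ms = 1000.0 * w.getnframes() / w.getframerate()
    return int(duration_ms / stride_ms) >= n_target_tokens
```

Dropping the samples these checks flag from the training list is usually enough to tell whether the NaNs come from the data or from the optimization itself (e.g. too high a learning rate for fine-tuning).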
Hello, I was training a new model, but after around 100,000 iterations (maybe still in the first epoch) I got the following error message. *** Aborted at 1590963336 (unix time)...
I have some questions regarding the language model. 1. I want to build a character-level LM. Previously I built a word-level LM using KenLM, and for that I created...
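Regarding the character-level LM: KenLM itself is token-agnostic, so the usual approach is to rewrite the training text so that each character (plus an explicit word-separator symbol) is a whitespace-separated token, then run `lmplz` on that file as before. A minimal sketch; the `|` separator is an assumption and should match the token set your decoder uses:

```python
def words_to_char_tokens(line: str, space_symbol: str = "|") -> str:
    """Rewrite 'the cat' -> 't h e | c a t' so each character (and the
    word separator) becomes a token lmplz can count n-grams over."""
    words = line.strip().lower().split()
    return " ".join(space_symbol.join(words)) if words else ""

# Convert the word-level corpus, then train as usual, e.g.
#   lmplz -o 6 < corpus.chars.txt > char_lm.arpa
with open("corpus.txt") as fin, open("corpus.chars.txt", "w") as fout:
    for line in fin:
        tokens = words_to_char_tokens(line)
        if tokens:
            fout.write(tokens + "\n")
```

Character LMs typically need a higher n-gram order than word LMs (hence `-o 6` or more in the example) to capture comparable context.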
### Question Are there any other wav2letter models published besides the one in AWS S3? http://dl.fbaipublicfiles.com/wav2letter/inference/examples/model Thanks
Hi, as we know, MWER training can reduce WER by a further 8% on top of CTC and seq2seq models. Are there any plans to implement it?
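For reference, MWER (minimum word error rate) training replaces the usual maximum-likelihood/CTC criterion with the expected number of word errors over an N-best list, roughly:

\mathcal{L}_{\text{MWER}} = \sum_{y \in \text{NBest}(x)} \hat{P}(y \mid x)\,\bigl(W(y, y^{*}) - \bar{W}\bigr)

where \hat{P}(y \mid x) is the model probability renormalized over the N-best hypotheses, W(y, y^{*}) is the word-error count of hypothesis y against the reference y^{*}, and \bar{W} is the average error over the list, subtracted as a baseline to reduce gradient variance. The reported gains come from optimizing this sequence-level objective directly rather than per-token likelihood.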
Hello, while training a TDS seq-to-seq model, (CUDA) OOM errors (ArrayFire exceptions) are raised constantly. The training configuration is: - 5k hours of training data - DGX-1V (8 x V100, each...