Training an LSTM for ASR
I intend to try out an LSTM for speech recognition. Looking at the t2t code I noticed that there is a `lstm_asr_v1` hparams set, which should probably work with the `lstm_seq2seq_attention` model?
However, it's not clear to me how the data has to be presented. At the moment I am using a regular ASR dataset, similar to what we'd use when training a Transformer model, but this does not seem to work right away with the LSTM model.
Is there any example of an ASR LSTM that I can take a look at?
https://github.com/tensorflow/tensor2tensor/blob/a9da9635917814af890a31a060c5b29d31b2f906/tensor2tensor/models/lstm.py#L361
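For reference, both names resolve through the t2t registry; here's a minimal sketch I'd use to inspect them (assuming a standard tensor2tensor install; the printed fields are just the usual base hparams):

```python
# Minimal sketch: resolve the hparams set and the model class by name.
# Importing tensor2tensor.models.lstm runs the registration decorators.
from tensor2tensor.models import lstm
from tensor2tensor.utils import registry

hparams = lstm.lstm_asr_v1()                          # hparams set linked above
model_cls = registry.model("lstm_seq2seq_attention")  # model class by registered name

print(model_cls.__name__)
print(hparams.num_hidden_layers, hparams.hidden_size, hparams.batch_size)
```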
Update
I noticed that using model `lstm_seq2seq` with `lstm_asr_v1` does seem to "work": training starts. I don't know yet whether anything useful will emerge, but at least it's not crashing. Only `lstm_seq2seq_attention` seems to have a problem with the provided input.
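In case it's useful for reproducing, the combination that started training can also be assembled programmatically via `trainer_lib`; a rough sketch (the problem name and `data_dir` are placeholders for whatever ASR problem/data you generated):

```python
# Rough sketch with placeholder problem name and paths: attach the
# lstm_asr_v1 hparams to an ASR problem and fetch the lstm_seq2seq model.
from tensor2tensor.models import lstm  # noqa: F401  (registers the LSTM models)
from tensor2tensor.utils import registry, trainer_lib

data_dir = "/tmp/t2t_data"                # placeholder: generated TFRecords
problem_name = "librispeech_clean_small"  # placeholder: any registered ASR problem

hparams = trainer_lib.create_hparams(
    "lstm_asr_v1", data_dir=data_dir, problem_name=problem_name)
model_cls = registry.model("lstm_seq2seq")

print(model_cls.__name__, hparams.hidden_size, hparams.batch_size)
```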
Hello, I was trying to do the same thing. May I know what you used for the checkpoint name, i.e. `ckpt_name`? TIA!
@snpushpi I didn't touch that. I guess it's the usual checkpoint name `ckpt-xxxxx`.
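If it helps, you can also recover the exact checkpoint prefix from the training directory; a small sketch (the output directory below is a placeholder):

```python
# Small sketch: find the latest checkpoint prefix ("model.ckpt-NNNNN")
# written by the trainer; the output directory below is a placeholder.
import tensorflow as tf

output_dir = "/tmp/t2t_train/lstm_asr"  # placeholder for your --output_dir
print(tf.train.latest_checkpoint(output_dir))
```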