
How to train/decode on reverberant speech?

Open · kevinmchu opened this issue · 1 comment

I'd like to train a model on reverberant speech using the alignments generated from the corresponding anechoic data. Currently, I'm doing something similar to TIMIT_joint_training_liGRU_fbank.cfg: I use the reverberant TIMIT recipe to extract the features and the anechoic recipe for lab_folder and lab_graph. I noticed that decode_dnn.sh uses lab_graph to generate the lattice, rather than the graph built from the reverberant acoustic model.

What is the easiest way to specify using the anechoic alignments and reverberant graph?
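For anyone with the same question, one plausible approach (a sketch, not a tested recipe) is to split the two roles in the dataset section of the .cfg file: keep lab_folder pointing at the anechoic alignment directory, but point lab_graph at the graph directory produced by the reverberant Kaldi recipe. All paths below are hypothetical placeholders; field names follow the usual pytorch-kaldi [dataset*] layout.

```ini
[dataset1]
data_name = TIMIT_rev_tr
; features extracted with the reverberant recipe
fea = fea_name=fbank
      fea_lst=/path/to/timit_rev/data/train/feats_fbank.scp
      fea_opts=apply-cmvn --utt2spk=ark:/path/to/timit_rev/data/train/utt2spk ark:/path/to/timit_rev/fbank/cmvn_train.ark ark:- ark:-
      cw_left=0
      cw_right=0

; alignments from the ANECHOIC recipe, graph from the REVERBERANT one
lab = lab_name=lab_cd
      lab_folder=/path/to/timit_anechoic/exp/tri3_ali
      lab_opts=ali-to-pdf
      lab_count_file=auto
      lab_data_folder=/path/to/timit_rev/data/train/
      lab_graph=/path/to/timit_rev/exp/tri3/graph
```

A caveat worth checking: the anechoic and reverberant triphone models must share the same decision tree (and hence the same pdf numbering) for the anechoic alignments to be consistent with the reverberant graph; otherwise the graph's transition-ids will not match the network's output layer.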

kevinmchu avatar Jan 08 '21 18:01 kevinmchu

I just wanted to follow up and ask if anyone has suggestions.

kevinmchu avatar Jan 25 '21 14:01 kevinmchu