
Generated waves were empty

Open frozen-finger opened this issue 7 years ago • 8 comments

I have trained this for over 23k steps, but when using synthesis.py the result seems to be empty, even though the generated mag looks normal. Can anyone tell me how to solve this problem?

frozen-finger avatar Apr 26 '19 10:04 frozen-finger

Sorry if this question seems stupid, but when I change is_training to True, the output is no longer just silence, although I still cannot understand what it says. So, is this about batch normalization? @Kyubyong
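One plausible explanation for the is_training effect: batch normalization uses the current batch's statistics in training mode but the accumulated moving averages at inference, and early in training those moving averages can be poorly estimated. A minimal numpy sketch of that difference (not the repo's actual TensorFlow code; names here are illustrative only):

```python
import numpy as np

def batch_norm(x, moving_mean, moving_var, is_training, eps=1e-5):
    # Training mode normalizes with the current batch's statistics;
    # inference mode uses the accumulated moving averages instead.
    if is_training:
        mean, var = x.mean(axis=0), x.var(axis=0)
    else:
        mean, var = moving_mean, moving_var
    return (x - mean) / np.sqrt(var + eps)

np.random.seed(0)
# A batch whose statistics differ from undertrained moving averages
# (here: moving mean 0 and variance 1, while the data sits around 5):
x = np.random.randn(8, 4) * 3.0 + 5.0
train_out = batch_norm(x, np.zeros(4), np.ones(4), is_training=True)
infer_out = batch_norm(x, np.zeros(4), np.ones(4), is_training=False)
# train_out is properly standardized; infer_out is badly mis-scaled.
```

This is only a sketch of the mechanism, but it matches the symptom: the same checkpoint sounds fine with is_training=True and degrades with is_training=False.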

frozen-finger avatar May 04 '19 10:05 frozen-finger

You're going to need to train for at least 150,000 steps I'd imagine. See the pretrained models.

nevercast avatar May 23 '19 00:05 nevercast

Thank you for your advice. Can I ask how many steps you trained for, and how the resulting model performed?

frozen-finger avatar May 23 '19 03:05 frozen-finger

I ran into this problem too. Even if I set is_training to True, the audio synthesized in inference mode is far worse than in training mode.

xiawenxing avatar Jun 01 '20 15:06 xiawenxing

@frozen-finger How did you solve this problem? Can you please explain?

giridhar-pamisetty avatar Jun 10 '20 06:06 giridhar-pamisetty

The difference between the quality of audio generated during training and inference is because your model hasn't learned "attention". Make sure to look at the attention plots like the one here. If your model is learning attention, you should start to see a more or less diagonal line. This is also the reason why @nevercast suggested you train for many more steps. Most of my training sessions start producing decent attention plots around 60k steps.
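For readers wondering how to judge "more or less diagonal", here is a rough numpy sketch that scores how much attention mass lies near the diagonal of a (decoder_steps, encoder_steps) alignment matrix. The diagonality helper is hypothetical, not part of this repo:

```python
import numpy as np

def diagonality(attn):
    # Fraction of attention mass within a small band around where a
    # perfectly diagonal alignment would attend at each decoder step.
    T, N = attn.shape
    score = 0.0
    for t in range(T):
        center = int(t / T * N)
        lo, hi = max(0, center - 2), min(N, center + 3)
        score += attn[t, lo:hi].sum()
    return score / T

T, N = 50, 60
# A sharply diagonal alignment (what a trained model should show) ...
good = np.zeros((T, N))
for t in range(T):
    good[t, int(t / T * N)] = 1.0
# ... versus a flat one (attention not learned yet).
bad = np.full((T, N), 1.0 / N)
```

A score near 1 means attention is concentrated along the diagonal; a flat alignment scores close to the band width divided by the encoder length.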

If your dataset has silence at the start or end of the audio files, trimming it would greatly help with this problem.
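Trimming is usually done with something like librosa.effects.trim; a dependency-free numpy sketch of the same idea, using a simple amplitude gate (the helper name and threshold are assumptions, not this repo's code):

```python
import numpy as np

def trim_silence(wav, threshold=0.01):
    # Keep only the span between the first and last samples whose
    # amplitude exceeds the threshold (librosa.effects.trim does this
    # more robustly, using a dB threshold on framed energy).
    idx = np.where(np.abs(wav) > threshold)[0]
    if len(idx) == 0:
        return wav[:0]
    return wav[idx[0]: idx[-1] + 1]

# Half a second of silence, one second of tone, then silence again:
sr = 16000
tone = 0.5 * np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
wav = np.concatenate([np.zeros(sr // 2), tone, np.zeros(sr // 2)])
trimmed = trim_silence(wav)
```

Removing leading and trailing silence keeps the decoder from having to learn arbitrary-length pauses before attention can lock onto the text.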

TheNarrator avatar Jun 10 '20 09:06 TheNarrator

@TheNarrator Thanks for the response.

@nevercast @frozen-finger @candlewill @Kyubyong The attention plots look diagonal after 50k steps, so it seems the model has learned attention, though maybe more steps are needed.

There seems to be a problem with the predicted mel (mel_hat) in synthesis.py: when I feed the original mel extracted from the wav file in place of the model's prediction, the result is perfect and sounds clean.

So I think the mel_hat prediction is going wrong. Will it improve after more steps?
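The ground-truth-swap check described above can be made more systematic by comparing mel_hat against mel_gt frame by frame, which shows whether the prediction collapses everywhere or only after some decoder step. A hedged numpy sketch (array shapes and names are assumptions, not the repo's API):

```python
import numpy as np

np.random.seed(0)
# Hypothetical (frames, n_mels) arrays standing in for real features:
mel_gt = np.random.rand(200, 80)       # ground-truth mel from the wav
mel_hat = np.zeros_like(mel_gt)        # an "empty" prediction, as reported

# Per-frame L1 error between prediction and ground truth.
frame_err = np.abs(mel_gt - mel_hat).mean(axis=1)
# A near-uniform error curve suggests a global collapse (e.g. wrong
# normalization or is_training flag) rather than attention drifting
# off after a particular decoder step.
```

Plotting frame_err over time, alongside the attention alignment, usually narrows the failure down to either the decoder inputs or the normalization setup.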

giridhar-pamisetty avatar Jun 11 '20 05:06 giridhar-pamisetty

I ran into the same problem: mel_gt and mag_gt are correct, but the mel_hat and mag_hat predictions go wrong and the synthesized audio is empty. Have you fixed it?

xiawenxing avatar Jun 12 '20 09:06 xiawenxing