Comprehensive-Transformer-TTS

Prosody Loss

Open inconnu11 opened this issue 3 years ago • 7 comments

Hi, I am adding your MDN prosody modeling code segment to my Tacotron, but I ran into a couple of questions about it. First, the prosody loss is added to the total loss only after prosody_loss_enable_steps, yet the prosody representation is already added to the text encoding before that point. Does that mean that, in the steps before prosody_loss_enable_steps, the prosody representation is optimized without the prosody loss? Second, during training, the gradient flowing into the prosody predictor should be cut off with a "stop gradient", but I can't find the relevant code. Thanks!


inconnu11 avatar Oct 19 '22 02:10 inconnu11

Hi @inconnu11 , thanks for your attention.

My intention was to prevent the prosody encoder from learning meaningless representations during the first few training steps. You can effectively disable prosody_loss_enable_steps (by setting it to 1, for example) if you don't need this behavior. Otherwise, there should be no gain from backpropagating through the prosody encoder, even though its output is still added to the text hidden states.
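In pseudocode, the idea is something like the sketch below (a minimal illustration with hypothetical names, not the exact implementation; the inputs are torch tensors):

```python
# A minimal sketch of the gating logic: the prosody representation is
# always added to the text hidden, but its loss term only joins the
# total loss after `prosody_loss_enable_steps`.
def fuse(text_hidden, prosody_repr, hard_stop_gradient=False):
    if hard_stop_gradient:
        # an explicit stop-gradient, if you want to also cut the
        # mel-loss gradient into the prosody encoder during warm-up
        prosody_repr = prosody_repr.detach()
    return text_hidden + prosody_repr

def total_loss(mel_loss, prosody_loss, step, prosody_loss_enable_steps):
    if step < prosody_loss_enable_steps:
        return mel_loss  # prosody loss gated off early in training
    return mel_loss + prosody_loss
```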

keonlee9420 avatar Oct 22 '22 06:10 keonlee9420

Hi, I got it, thanks for the reply. But when I run the code on the LJSpeech corpus with the default settings, except switching the prosody modeling type to 'du2021', the prosody loss at prosody_loss_enable_steps (100k by default) is nan.

[screenshot: training log showing nan prosody loss]

inconnu11 avatar Oct 26 '22 02:10 inconnu11

Hmm, that's weird. If you have the bandwidth, could you please do some sanity checks on your side? For example, removing parts of the code to simplify it until the nan loss disappears would be one approach. It would definitely be helpful for others interested in this issue.
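One cheap check before bisecting the code is to make PyTorch report where the nan first appears (standard PyTorch calls, nothing repo-specific; slows training, so only use it while debugging):

```python
import torch

# 1) make every backward pass report the op that produced a nan/inf
torch.autograd.set_detect_anomaly(True)

# 2) assert the losses are finite before calling backward()
def assert_finite(name, tensor):
    if not torch.isfinite(tensor).all():
        raise RuntimeError(f"{name} is non-finite: {tensor}")

# in the training loop, before total_loss.backward():
#   assert_finite("prosody_loss", prosody_loss)
#   assert_finite("total_loss", total_loss)
```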

keonlee9420 avatar Oct 30 '22 14:10 keonlee9420

I'd like to do that, but it takes too long to train: about 7 days on a single T4 GPU. Are there any parts of the code that could be changed to speed up training?

inconnu11 avatar Oct 31 '22 03:10 inconnu11

Hi,

I'm the author of this paper. My code for calculating the MDN loss is here, with a small numerical stability trick: MDN_loglike
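For context, the usual trick is to compute the mixture likelihood entirely in the log domain with logsumexp. Below is a generic PyTorch sketch of a stable MDN negative log-likelihood (an illustration with my own variable names, not a copy of the linked MDN_loglike code):

```python
import math
import torch

def mdn_nll(w_logits, mu, log_sigma, target):
    """
    Negative log-likelihood of a diagonal-Gaussian mixture, computed
    in the log domain with logsumexp for numerical stability.
      w_logits:  (B, T, K)    unnormalized mixture weights
      mu:        (B, T, K, D) component means
      log_sigma: (B, T, K, D) component log std-devs
      target:    (B, T, D)    observed prosody vectors
    """
    target = target.unsqueeze(2)  # (B, T, 1, D), broadcast over K
    # per-component log N(target | mu, sigma^2), summed over feature dim D
    log_prob = -0.5 * (
        ((target - mu) / log_sigma.exp()) ** 2
        + 2 * log_sigma
        + math.log(2 * math.pi)
    ).sum(dim=-1)  # (B, T, K)
    log_w = torch.log_softmax(w_logits, dim=-1)
    # log sum_k w_k * N_k, never exponentiating tiny likelihoods directly
    log_like = torch.logsumexp(log_w + log_prob, dim=-1)  # (B, T)
    return -log_like.mean()
```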

Does that help?

cpdu avatar Oct 31 '22 13:10 cpdu

Hi, I changed the MDN loss calculation from the original version (fig. 1) to the newer one (fig. 2), but it doesn't seem to work.

original MDN loss:

[screenshot: loss curve with the original MDN loss]

newer MDN loss:

[screenshot: loss curve with the newer MDN loss]

inconnu11 avatar Nov 02 '22 13:11 inconnu11

The MDN loss (i.e., the negative log-likelihood) can take negative values. However, in your log it is almost exactly 0 before becoming nan, so I'd check whether you are computing the likelihood correctly.
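For intuition, here is a tiny PyTorch check of why a negative NLL is valid on its own: a Gaussian with a small standard deviation has density greater than 1 near its mean.

```python
import torch
from torch.distributions import Normal

# N(0, 0.1) has density 1 / (0.1 * sqrt(2*pi)) ≈ 3.99 at its mean,
# so the NLL there is ≈ -1.38: a negative NLL is not a bug by itself.
nll = -Normal(loc=0.0, scale=0.1).log_prob(torch.tensor(0.0))
print(nll.item())  # ≈ -1.3836
```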

cpdu avatar Nov 03 '22 12:11 cpdu