Comprehensive-Transformer-TTS

New TTS Model request

Open rishikksh20 opened this issue 4 years ago • 19 comments

Recently, two papers regarding Transformer TTS have popped up, and I think both are suitable for this repo:

  1. DelightfulTTS: The Microsoft Speech Synthesis System for Blizzard Challenge 2021
  2. Emphasis control for parallel neural TTS

I think both are easy to implement and well suited for this repo.

rishikksh20 avatar Nov 19 '21 20:11 rishikksh20

Hi @rishikksh20, thanks for the requests! I can see that they fit well with this project. I will look into them and hope that I can merge them into this repo :)

keonlee9420 avatar Nov 22 '21 10:11 keonlee9420

Hi @keonlee9420, DelightfulTTS is similar to the Phone-Level Mixture Density Network, but here, instead of using a complicated GMM-based model, the authors directly use latent representations for the prosody predictor and prosody encoder. The phoneme-level prosody encoder and utterance-level encoder are similar to this. I think they simply use a Global Style Token (GST) module as the utterance-level encoder.

rishikksh20 avatar Nov 22 '21 17:11 rishikksh20
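To make the GST comparison concrete, here is a minimal sketch of a style-token layer of the kind used as an utterance-level encoder: a learned token bank attended over by a reference embedding. All names and sizes are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StyleTokenLayer(nn.Module):
    """GST-style attention over a learned token bank (illustrative sketch)."""
    def __init__(self, n_tokens=10, d_model=64):
        super().__init__()
        self.d_model = d_model
        self.tokens = nn.Parameter(torch.randn(n_tokens, d_model))
        self.query_proj = nn.Linear(d_model, d_model)

    def forward(self, ref_embedding):
        # ref_embedding: (B, d_model) utterance-level reference encoding
        q = self.query_proj(ref_embedding)                  # (B, d_model)
        scores = q @ self.tokens.t() / self.d_model ** 0.5  # (B, n_tokens)
        weights = F.softmax(scores, dim=-1)
        # Weighted sum over (bounded) tokens gives the style embedding.
        return weights @ torch.tanh(self.tokens)            # (B, d_model)
```

At inference, the attention weights can also be set by hand to steer style, which is why GST works as a reference-free control knob.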

DelightfulTTS learns phoneme-level prosody implicitly, whereas Emphasis control for parallel neural TTS learns the same explicitly by extracting features from this repo.

rishikksh20 avatar Nov 22 '21 17:11 rishikksh20

I think DelightfulTTS is an all-in-one solution: it uses a non-autoregressive architecture with conformer blocks, and both utterance-level and phoneme-level predictors as well.

rishikksh20 avatar Nov 22 '21 17:11 rishikksh20

Thank you for the summary. The DelightfulTTS model seems worth a try, as you described. I will try it and share it in an update soon!

keonlee9420 avatar Nov 23 '21 08:11 keonlee9420

@keonlee9420 Hi, were you able to train DelightfulTTS successfully?

rishikksh20 avatar Dec 10 '21 15:12 rishikksh20

Yes, but it shows an overfitting issue. I guess this issue originates from the limited capacity of the prosody predictor, since I can confirm that the prosody embedding extracted from the prosody extractor can actually improve expressiveness, including the validation loss.

keonlee9420 avatar Dec 13 '21 07:12 keonlee9420
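The extractor-versus-predictor split described above can be sketched as follows: at training time a prosody extractor is teacher-forced on phoneme-aligned ground-truth mel, while a predictor working from text-encoder states is trained toward the extractor's output; at inference only the predictor is available. Module names and dimensions here are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PhonemeProsody(nn.Module):
    """Illustrative sketch of teacher-forced extractor vs. inference predictor."""
    def __init__(self, d_text=64, n_mels=80, d_prosody=16):
        super().__init__()
        self.extractor = nn.Linear(n_mels, d_prosody)  # sees phoneme-aligned mel
        self.predictor = nn.Linear(d_text, d_prosody)  # sees text hidden states only

    def forward(self, h_text, mel_phoneme=None):
        if self.training and mel_phoneme is not None:
            target = self.extractor(mel_phoneme)  # teacher-forced prosody
            # Auxiliary loss pulls the predictor toward the extractor's output.
            aux = F.mse_loss(self.predictor(h_text), target.detach())
            return target, aux                    # decode with extractor output
        return self.predictor(h_text), None       # inference path
```

An overfitting gap of the kind reported then shows up as the predictor's auxiliary loss diverging on validation data while the extractor path keeps improving.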

Did you train the predictor and extractor simultaneously, or did you train the extractor for 100k steps first, then pause it and start predictor training in a teacher-forcing manner, as mentioned in the AdaSpeech paper?

rishikksh20 avatar Dec 14 '21 06:12 rishikksh20

In my case I made some modifications to the architecture. I used the same extractors as mentioned in the DelightfulTTS paper, but I am not using any predictor at the utterance level, because I want to use it like GST-Tacotron by passing an external reference mel. For the phoneme-level predictor I used a predictor architecture similar to the original AdaSpeech's, which resembles the duration and pitch predictors.

I trained the phoneme-level extractor for 100k steps, then stopped it and started predictor training. While training this with batch size 32, the model loss behaved well up to about 2000 steps, but after around 2200 steps the loss started increasing, never converged, and the output was just noise. However, when I passed a detached hidden state to the phoneme-level extractor, it trained perfectly and the latent variable also worked: I am able to change emotion using the latent variable of the phoneme-level predictor.

rishikksh20 avatar Dec 14 '21 06:12 rishikksh20
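The "detached hidden state" fix described above can be reproduced in a few lines: detaching the text-encoder output before feeding the prosody module blocks its gradients from destabilizing the encoder. The module names below are stand-ins, not the repo's actual classes.

```python
import torch
import torch.nn as nn

encoder = nn.Linear(8, 8)    # stands in for the text encoder
predictor = nn.Linear(8, 1)  # stands in for the phoneme-level prosody module

h = encoder(torch.randn(4, 8))
# Passing h.detach() cuts the graph: the prosody loss cannot
# backpropagate into the encoder weights.
loss = predictor(h.detach()).pow(2).mean()
loss.backward()

assert encoder.weight.grad is None        # no gradient reached the encoder
assert predictor.weight.grad is not None  # the prosody module still trains
```

Without `.detach()`, `encoder.weight.grad` would be populated, which is the gradient path the detach trick removes.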

ah, thanks for sharing. I trained jointly without any detach or schedule from the first step. So what you mean is

  1. training only the prosody extractor (not predictor) until 100k
  2. then start training the prosody predictor, but with a detached prosody embedding from the prosody extractor (while the prosody extractor is still training), right? Or in 2, do you mean no gradient flows back to the prosody extractor either?

keonlee9420 avatar Dec 14 '21 08:12 keonlee9420

I suggest 1

rishikksh20 avatar Dec 14 '21 09:12 rishikksh20
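The suggested schedule (extractor-only warmup, then predictor training with the extractor still updating) can be gated with a small helper. This is an illustrative sketch, not code from the repo; the 100k threshold comes from the discussion above.

```python
def training_stages(step, extractor_warmup=100_000):
    """Illustrative gating: train only the prosody extractor for the
    first 100k steps, then enable the predictor while the extractor
    keeps training alongside it."""
    return {
        "extractor": True,                      # always trained
        "predictor": step >= extractor_warmup,  # enabled after warmup
    }

# Example use inside a training loop (hypothetical variable names):
# stages = training_stages(global_step)
# if stages["predictor"]:
#     total_loss = total_loss + predictor_loss
```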

@keonlee9420 In your experience, which performs better, a normal Transformer encoder or a Conformer, when you have only 20 hours of speech data?

rishikksh20 avatar Dec 16 '21 05:12 rishikksh20

As per this article, the Microsoft TTS API is built on DelightfulTTS.

rishikksh20 avatar Dec 21 '21 05:12 rishikksh20

> I suggest 1

Can you share your code, @rishikksh20?

hdmjdp avatar Jan 30 '22 04:01 hdmjdp

> detached hidden state

@rishikksh20 Does this refer to the text encoder output?

hdmjdp avatar Jan 30 '22 06:01 hdmjdp

> > detached hidden state
>
> @rishikksh20 Does this refer to the text encoder output?

Yes.

rishikksh20 avatar Jan 30 '22 08:01 rishikksh20

@rishikksh20 After 100k steps, do the params of the prosody extractor keep updating, or are they frozen?

hdmjdp avatar Jan 30 '22 12:01 hdmjdp

Is there any confirmation on the quality of the Transformer encoder versus the Conformer? I found that the conformer in DelightfulTTS differs a little bit from the ASR one.

v-nhandt21 avatar Aug 15 '22 09:08 v-nhandt21

@v-nhandt21 Yes, the conformer in TTS is a modified version of the ASR one.

rishikksh20 avatar Aug 15 '22 14:08 rishikksh20