
Audio quality improvements

Open janvainer opened this issue 3 years ago • 6 comments

Hi, awesome contribution for the TTS community :) I am wondering, did you manage to train a model with higher audio quality than the pretrained checkpoint provided with this repo? The audio samples seem to have lower quality than the ones presented in the paper. Any ideas what might be missing?

I am now training the model from scratch and the audio samples are still very noisy (approx. 12 hours on 2 GPUs, batch size 128). It is getting better, but I am curious about the upper bound on quality achievable with the provided source code.

janvainer avatar Mar 28 '21 14:03 janvainer

@janvainer Hey, thanks, man. Yeah, the samples are of somewhat lower quality than the ones on the paper's demo page. Keep in mind that the authors trained on their own proprietary dataset, whose female speaker had a much lower pitch than Linda (it is always hard to train on LJSpeech). I also noticed that the fewer diffusion iterations you use, the less accurately the model reconstructs the higher frequencies.

That said, I suspect there might be some issues in the diffusion calculations. I'd suggest looking at lucidrains' code and reusing his forward and backward DDPM calculations with the improved cosine schedule (maybe this can help): https://github.com/lucidrains/denoising-diffusion-pytorch. His repo follows the paper https://arxiv.org/pdf/2102.09672.pdf. I plan to return to this WaveGrad repo and finally get its best quality once all my other projects are finished, but that may be delayed until summer.

You can also check Mozilla's TTS library; I remember some people there were interested in WaveGrad and even added it to their codebase: https://github.com/mozilla/TTS. Hope this helps.
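For reference, the cosine noise schedule from the improved-DDPM paper linked above can be sketched in a few lines. This is a minimal standalone sketch (the function and variable names are mine, not this repo's or lucidrains' API):

```python
import math

def cosine_beta_schedule(timesteps, s=0.008, max_beta=0.999):
    """Cosine noise schedule from the improved-DDPM paper (arXiv:2102.09672).

    alpha_bar(t) follows a squared cosine; each beta_t is derived from the
    ratio of consecutive alpha_bar values, clipped at max_beta for stability.
    """
    def alpha_bar(t):
        return math.cos((t / timesteps + s) / (1 + s) * math.pi / 2) ** 2

    return [min(1 - alpha_bar(t + 1) / alpha_bar(t), max_beta)
            for t in range(timesteps)]

betas = cosine_beta_schedule(1000)
# betas start tiny and grow smoothly toward max_beta
```

Compared to a linear schedule, the cosine one adds noise more gently near t = 0, which that paper reports helps sample quality, especially at low iteration counts.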

ivanvovk avatar Mar 28 '21 14:03 ivanvovk

Thanks for the swift response :) I will check the diffusion calculations. I also tried the Mozilla version, but the quality of the synthesized audio seemed a bit lower to me, at least for the WaveGrad vocoder combined with Tacotron 2. There is this weird high-frequency noise.

On a side note, I am getting an increasing L1 test batch loss, while the L1 test spectral batch loss is going down. Did you experience the same behavior?

[image: test loss curves]
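The divergence between the two losses is plausible: waveform L1 is phase-sensitive, while a spectral loss compares magnitudes. A toy sketch (pure Python, not the repo's actual loss code) shows two signals that differ only in phase, giving a large waveform L1 but a near-zero magnitude-spectrum L1:

```python
import cmath
import math

def dft_magnitudes(x):
    """Naive DFT magnitude spectrum (fine for a tiny toy example)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

N = 64
tone = [math.sin(2 * math.pi * 4 * t / N) for t in range(N)]
# same tone, shifted by 90 degrees in phase
shifted = [math.sin(2 * math.pi * 4 * t / N + math.pi / 2) for t in range(N)]

waveform_l1 = sum(abs(a - b) for a, b in zip(tone, shifted)) / N
spectral_l1 = sum(abs(a - b) for a, b in zip(dft_magnitudes(tone),
                                             dft_magnitudes(shifted))) / N

print(waveform_l1)  # large (~0.9), caused purely by the phase shift
print(spectral_l1)  # near zero: the magnitude spectra are identical
```

So a model can keep improving spectrally while its raw waveform L1 drifts, which matches the plot above.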

janvainer avatar Mar 29 '21 07:03 janvainer

@janvainer Yes, actually, I remember from my experiments that this loss was not representative at all; the spectral one was more informative. I think such behavior is okay, don't pay attention to it.

ivanvovk avatar Mar 29 '21 16:03 ivanvovk

Ok thanks! :)

janvainer avatar Apr 04 '21 14:04 janvainer

Hello, @janvainer! I just trained the model and the audio samples are still very noisy (approx. 12 hours, 25K epochs on a single GPU, batch size 96). Could you show me your training results? And after how long should the samples sound good? Thanks!

yijingshihenxiule avatar Jun 14 '22 02:06 yijingshihenxiule

Hi, unfortunately I no longer have the results. But I remember training on 4 GPUs for several days.

janvainer avatar Jun 14 '22 07:06 janvainer