
Is a pretrained model of Tacotron2 + LPCNet available?

Open mrgloom opened this issue 5 years ago • 24 comments

Is a pretrained model of Tacotron2 + LPCNet available?

mrgloom avatar May 16 '19 13:05 mrgloom

We have pretrained models of both on a Mandarin dataset, but I think it would be illegal to release them personally without my company's permission. It is not hard to train the models on your own material by following the steps in the README. I will rerun the training and synthesis procedures to make sure they are correct.

MlWoo avatar May 16 '19 14:05 MlWoo

Hi @MlWoo, I have trained Tacotron2 for about 21000 steps on a female Mandarin dataset and connected it to LPCNet. At 21000 steps the loss is about 0.33 and the decoder is already aligned with the encoder. The output wav has really bad quality, i.e. a large portion of the sentence is silence, and you cannot even tell the gender from the voiced part. How many steps did you train Tacotron for to achieve good sound?

sheepHavingPurpleLeaf avatar May 25 '19 06:05 sheepHavingPurpleLeaf

I have pointed out that the vocoder quality is sensitive to the estimation of the pitch parameters. Maybe you could achieve it with 210000 steps. We used different params to train Tacotron2, and I don't think they hold much meaningful info for you. Our loss is less than 0.1.

MlWoo avatar May 25 '19 09:05 MlWoo
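For readers trying to debug the pitch sensitivity mentioned above: a minimal sketch of how the pitch-related features in an LPCNet `.f32` feature file can be inspected with NumPy. The frame width and the pitch/correlation indices below are assumptions taken from older LPCNet `dump_data` output; check `dump_data.c` in your own checkout before relying on them.

```python
import numpy as np

# Assumed layout of a dump_data .f32 file (verify against dump_data.c):
FRAME_DIM = 55   # floats per frame (assumption)
PITCH_IDX = 36   # pitch-period feature index (assumption)
CORR_IDX = 37    # pitch-correlation feature index (assumption)

def inspect_pitch(path):
    """Print simple statistics of the pitch features in a .f32 file."""
    feats = np.fromfile(path, dtype=np.float32).reshape(-1, FRAME_DIM)
    pitch = feats[:, PITCH_IDX]
    corr = feats[:, CORR_IDX]
    print(f"{len(feats)} frames")
    print(f"pitch mean={pitch.mean():.3f} std={pitch.std():.3f}")
    print(f"corr  mean={corr.mean():.3f} std={corr.std():.3f}")
    return pitch, corr
```

Comparing these statistics between ground-truth features and Tacotron2's predicted features is one quick way to see whether the pitch track is what is degrading the vocoder output.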

@MlWoo I waited for another 10k steps, and the loss stays above 0.3. Any advice on the taco2 params? Thanks in advance.

sheepHavingPurpleLeaf avatar May 26 '19 14:05 sheepHavingPurpleLeaf

Hi @MlWoo, did you train T2 with a 16k Mandarin dataset?

superhg2012 avatar May 30 '19 01:05 superhg2012

@superhg2012 yes.

MlWoo avatar May 31 '19 06:05 MlWoo

@sheepHavingPurpleLeaf which Tacotron repo did you use? Any better results?

superhg2012 avatar Jun 10 '19 02:06 superhg2012

@MlWoo the audio processing parameters are not used when training T2 with .f32 feature files. I tried different hparams but can only reach a loss of 0.2. Did you adjust the T2 network params? Thanks in advance!

superhg2012 avatar Jun 10 '19 02:06 superhg2012
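As noted above, when Tacotron2 is trained to predict LPCNet feature frames directly from `.f32` files, the usual audio hparams (FFT size, hop size, mel bins) are never consulted. A hypothetical loader for such targets might look like the sketch below; `NB_FEATURES = 20` (18 cepstral + 2 pitch features) and the reduction-factor padding are assumptions you would match to your own extraction step and model config.

```python
import numpy as np

NB_FEATURES = 20  # assumed network-input feature width (18 cepstra + 2 pitch)

def load_target(path, r=2):
    """Load an (n_frames, NB_FEATURES) target array from a .f32 file,
    zero-padded so the frame count is a multiple of the reduction factor r."""
    feats = np.fromfile(path, dtype=np.float32).reshape(-1, NB_FEATURES)
    pad = (-len(feats)) % r
    if pad:
        feats = np.vstack([feats, np.zeros((pad, NB_FEATURES), np.float32)])
    return feats
```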

@superhg2012 That loss is obtained in teacher-forcing mode.

MlWoo avatar Jun 10 '19 03:06 MlWoo

@MlWoo thanks!! Between constant and scheduled, which mode is preferred?

superhg2012 avatar Jun 10 '19 03:06 superhg2012

@superhg2012 constant mode.

MlWoo avatar Jun 10 '19 06:06 MlWoo

many thanks !!

superhg2012 avatar Jun 10 '19 06:06 superhg2012

I trained T2 for 130k steps and the lowest loss value is 0.13, but the synthesized audio is still not as good as expected. Is some post-processing needed?

demo.zip

@MlWoo I think LPCNet is fine; the cause is the pitch parameters predicted by Tacotron2. Could you give some suggestions?

superhg2012 avatar Jun 13 '19 06:06 superhg2012

@superhg2012 Can you share your hparams? Are you training with pinyin or phonemes?

sheepHavingPurpleLeaf avatar Jun 13 '19 06:06 sheepHavingPurpleLeaf

@estherxue Could you post your samples?

MlWoo avatar Jun 13 '19 07:06 MlWoo

@sheepHavingPurpleLeaf I used pinyin to train Tacotron2, and the parameters are the common ones.

superhg2012 avatar Jun 13 '19 07:06 superhg2012

@superhg2012 I have got similar results to yours. Did you train LPCNet with the English dataset provided in Mozilla's repo, or did you use your Mandarin dataset?

sheepHavingPurpleLeaf avatar Jun 13 '19 07:06 sheepHavingPurpleLeaf

@sheepHavingPurpleLeaf I used the same Mandarin dataset for both LPCNet and Tacotron2. The sound quality is almost the same while the loss is around 0.13 to 0.17.

superhg2012 avatar Jun 13 '19 07:06 superhg2012

Hi, here are my samples trained with Tacotron 2 + LPCNet. tacotron2+lpcnet.zip

estherxue avatar Jun 13 '19 07:06 estherxue

@estherxue Hi, the examples sound good. I have several questions:

  1. Are you using pinyin or phonemes to train?
  2. Are you using the same dataset to train both T2 and LPCNet?
  3. How many steps did it take to train the T2 part, and what was the final loss?
  4. Are you training in GTA mode?

thanks in advance!!

superhg2012 avatar Jun 13 '19 07:06 superhg2012

@superhg2012 Our team (Xue is my colleague) does not use any other tricks to train Tacotron2.

  1. Pinyin.
  2. The same dataset.
  3. 280k, if I remember correctly, and the loss is about 0.1. Maybe the lr scheduling is not the same as in the T2 repo, because the T2 repo was updated recently.
  4. No GTA. If you want to use GTA mode, there is a lot of tricky work to be done (like rounding the audio length to a whole number of frames).

MlWoo avatar Jun 13 '19 14:06 MlWoo
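One of the "tricky" GTA preprocessing steps mentioned above is keeping the waveform and the predicted feature frames aligned by trimming or padding the audio to an exact multiple of the vocoder frame size. A minimal sketch, assuming a 160-sample frame (10 ms at 16 kHz):

```python
import numpy as np

def round_to_frames(audio, frame_size=160, pad=True):
    """Pad with silence (or trim) so len(audio) is a multiple of frame_size.
    frame_size=160 assumes 10 ms frames at 16 kHz."""
    remainder = len(audio) % frame_size
    if remainder == 0:
        return audio
    if pad:
        tail = np.zeros(frame_size - remainder, dtype=audio.dtype)
        return np.concatenate([audio, tail])
    return audio[:len(audio) - remainder]
```

Without this step, the number of GTA frames produced by Tacotron2 and the number of frames the vocoder expects drift apart by up to one frame per utterance, which accumulates into audible artifacts during fine-tuning.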

@MlWoo get it, thanks for kind reply!!

superhg2012 avatar Jun 14 '19 03:06 superhg2012

How long does it take to synthesize on GPU and on CPU?

ajaysg-zz avatar Oct 06 '19 11:10 ajaysg-zz

If my memory serves me correctly, synthesis on GPU is very slow. On CPU, the speed can reach more than 3x real time.


estherxue avatar Oct 12 '19 13:10 estherxue
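For context on the "3x real time" figure above: the real-time factor is the seconds of audio produced per wall-clock second of synthesis. A rough way to measure it, where `synthesize` is a hypothetical function returning raw samples:

```python
import time

def real_time_factor(synthesize, text, sample_rate=16000):
    """Return audio-seconds produced per wall-clock second.
    `synthesize` is a placeholder for your TTS pipeline; it must
    return a sequence of samples at `sample_rate`."""
    start = time.perf_counter()
    samples = synthesize(text)
    elapsed = time.perf_counter() - start
    return (len(samples) / sample_rate) / elapsed
```

A factor above 1.0 means faster than real time. GPUs are often slower here because LPCNet's sample-by-sample autoregressive loop is dominated by kernel-launch overhead rather than raw compute.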