LPCNet
Is a pretrained model of Tacotron2 + LPCNet available?
We have pretrained models of both on a Mandarin dataset, but I think it would not be legal for me to release them personally without my company's permission. It is not hard to train the models with your own materials following the steps in the readme. I will rerun the training and synthesis procedures to make sure the steps are right.
Hi @MlWoo, I have trained Tacotron2 for about 21,000 steps on a female Mandarin dataset and connected it to LPCNet. At 21,000 steps the error is about 0.33 and the decoder is already aligned with the encoder. The output wav has really bad quality, i.e. a large portion of the sentence is silence, and you cannot even tell the speaker's gender from the voiced parts. How many steps did you train Tacotron for to achieve good sound?
I have pointed out that the vocoder's quality is sensitive to the estimation of the pitch parameters. Maybe you could achieve it with 210,000 steps. We use different params to train Tacotron2, and I don't think they carry much meaningful info for you. Our loss is less than 0.1.
@MlWoo I waited for another 10k steps, but the loss stays above 0.3. Any advice on the Taco2 params? Thanks in advance.
Hi @MlWoo, did you train T2 with a 16k Mandarin dataset?
@superhg2012 yes.
@sheepHavingPurpleLeaf which Tacotron repo did you use? Any better results?
@MlWoo the audio processing parameters are not used when training T2 with .f32 feature files. I tried different hparams but can only achieve a loss of 0.2. Did you adjust the T2 network params? Thanks in advance!
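Since the thread revolves around training T2 directly on LPCNet's raw `.f32` feature files, here is a minimal sketch of loading one into a frame matrix. The frame width of 55 floats per frame is an assumption based on LPCNet's `dump_data` defaults at the time; verify it against your own checkout before relying on it.

```python
import numpy as np

# Assumed frame width: LPCNet's dump_data is assumed to write 55 float32
# values per 10 ms frame (Bark-scale cepstra, pitch period, pitch
# correlation, plus LPC/padding slots). Check your repo version.
NB_FEATURES = 55

def load_f32_features(path, nb_features=NB_FEATURES):
    """Load a raw .f32 feature file into a (frames, features) array."""
    data = np.fromfile(path, dtype=np.float32)
    if data.size % nb_features != 0:
        raise ValueError(
            f"{data.size} floats is not a multiple of {nb_features}; "
            "check the assumed frame width")
    return data.reshape(-1, nb_features)
```

A quick sanity check like this catches the most common mismatch (training T2 against features dumped with a different frame layout) before wasting GPU hours.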
@superhg2012 That loss is obtained with teacher-forcing mode.
@MlWoo thanks!! Between constant and scheduled, which teacher-forcing mode is preferred?
@superhg2012 constant mode.
many thanks !!
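The constant-vs-scheduled distinction above can be sketched as a small helper that returns the teacher-forcing ratio for a given training step. The function name, the linear decay shape, and all default values here are illustrative assumptions, not the actual hparams of any particular Tacotron-2 repo.

```python
def teacher_forcing_ratio(step, mode="constant",
                          constant_ratio=1.0,
                          init_ratio=1.0, final_ratio=0.0,
                          decay_start=10000, decay_steps=40000):
    """Probability of feeding ground-truth frames to the decoder.

    'constant' keeps a fixed ratio for the whole run; 'scheduled'
    linearly anneals it from init_ratio to final_ratio between
    decay_start and decay_start + decay_steps. (Illustrative sketch;
    real repos often use other decay curves.)
    """
    if mode == "constant":
        return constant_ratio
    if step < decay_start:
        return init_ratio
    progress = min(1.0, (step - decay_start) / decay_steps)
    return init_ratio + (final_ratio - init_ratio) * progress
```

With constant mode (as recommended above), the reported loss stays comparable across the whole run, since the decoder always sees ground-truth frames with the same probability.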
I trained T2 for 130k steps and the lowest loss value is 0.13, but the synthesized audio is still not as good as expected. Is some post-processing needed?
@MlWoo I think LPCNet itself is fine; the cause is the pitch parameters predicted by Tacotron2. Could you give some suggestions?
@superhg2012 Can you share your hparams? you are using pinyin to train or phoneme?
@estherxue Could you post your samples?
@sheepHavingPurpleLeaf I used pinyin to train Tacotron2 and parameters is common.
@superhg2012 I got similar results to yours. Did you train LPCNet with the English dataset provided in Mozilla's repo, or did you use your Mandarin dataset?
@sheepHavingPurpleLeaf I used the same Mandarin dataset for LPCNet and Tacotron2. The sound quality is almost the same while the loss is around 0.13 ~ 0.17.
Hi, here are my samples trained with Tacotron 2 + LPCNet. tacotron2+lpcnet.zip
@estherxue hi, the samples sound good. I have several questions:
1. Are you training with pinyin or phonemes?
2. Are you using the same dataset to train both T2 and LPCNet?
3. How many steps did it take to train the T2 part, and what was the final loss?
4. Are you training in GTA mode?
Thanks in advance!!
@superhg2012 Our team (Xue is my colleague) does not use any other tricks to train Tacotron2.
- pinyin
- the same dataset
- 280k, if I remember correctly, and the loss is about 0.1. The lr scheduling may not be the same as in the T2 repo because the T2 repo was updated recently.
- No GTA. If you want to use GTA mode, there is a lot of tricky work to be done (like rounding the audio to frame boundaries).
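The "rounding the audio to frame boundaries" step mentioned above can be sketched as padding (or trimming) the waveform so its length is an exact multiple of the frame hop, so that audio frames and GTA-predicted feature frames line up one-to-one. The hop of 160 samples (10 ms at 16 kHz) is an assumed value; use whatever your feature extractor actually uses.

```python
import numpy as np

def round_audio_to_frames(audio, hop=160, pad=True):
    """Make len(audio) an exact multiple of the frame hop.

    hop=160 assumes 10 ms frames at 16 kHz (an assumption, not a value
    confirmed in this thread). pad=True zero-pads up to the next frame
    boundary; pad=False trims down to the previous one.
    """
    remainder = len(audio) % hop
    if remainder == 0:
        return audio
    if pad:
        return np.pad(audio, (0, hop - remainder))
    return audio[:len(audio) - remainder]
```

Without this alignment, the last partial frame causes an off-by-one mismatch between the dumped features and the target waveform for every utterance.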
@MlWoo get it, thanks for kind reply!!
How long does it take to synthesize on GPU and on CPU?
If my memory serves me correctly, synthesis on GPU is very slow. On CPU, the speed can reach above 3x real time.
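"3x real time" here means the vocoder generates audio faster than it plays back. A trivial helper makes the measurement explicit (the function name is just for illustration):

```python
def speedup_vs_realtime(synthesis_seconds, audio_seconds):
    """How many times faster than real time synthesis ran.

    Example: generating 10 s of audio in 3 s of wall-clock time
    is 10/3, roughly 3.3x real time. Values below 1.0 mean
    slower than real time (as reported for GPU synthesis above).
    """
    return audio_seconds / synthesis_seconds
```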