
Integrating Tacotron and LPCNet: Training tacotron with .f32 features

Open rpratesh opened this issue 5 years ago • 31 comments

In the ReadMe, it's mentioned

> Convert the data generated in the last step (which has a .f32 extension) into something that can be loaded with numpy. I merge it into the Tacotron feeder here and here with the following code.

> mel_target = np.fromfile(os.path.join(self._mel_dir, meta[0]), dtype='float32')
> mel_target = np.resize(mel_target, (-1, self._hparams.num_mels))

But meta[0] will contain speech-audio-xxxx.npy file names, while self._mel_dir holds the speech-mel-xxxx.npy files. So the above snippet ends up searching for speech-audio (npy or f32) files inside the mel dir. Is there anything wrong with the above code snippet?
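For reference, a minimal, self-contained sketch of loading such a .f32 dump (the 20-dim frame width and the file name are assumptions; note that `np.reshape`, unlike `np.resize`, supports inferring a dimension with -1):

```python
import os
import tempfile

import numpy as np

num_mels = 20  # feature dimension assumed here

# Write a dummy .f32 file: a flat stream of float32 values, no header.
frames = np.arange(100, dtype=np.float32).reshape(5, num_mels)
path = os.path.join(tempfile.mkdtemp(), "speech-audio-0001.f32")
frames.tofile(path)

# Load it back and restore the (frames, num_mels) shape.
mel_target = np.fromfile(path, dtype="float32")
mel_target = mel_target.reshape(-1, num_mels)  # reshape infers the frame count
```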

One more doubt: where should I copy the .f32 files generated in the previous step (into the mels, wavs, or linear folder) so that we can train Tacotron with these generated features?

Also, In this case, should I use

python train.py --model='Tacotron-2'

which trains the entire Tacotron+WaveNet pipeline,

or use

python train.py --model='Tacotron'

which trains only Tacotron.

Thanks

rpratesh avatar May 17 '19 05:05 rpratesh

  1. About meta[0]: if you use the Tacotron-2 preprocessing, you get three folders (audio, mel spectrum and linear spectrum) and a txt file that provides the metadata. Because I do not need the audio files for Tacotron-2 training, I made a soft link from the audio folder to the f32 folder; meanwhile, I can still train Tacotron-2 with mels conventionally. Of course, I had to modify the first column (meta[0]) to the real name of the f32 file. You can point it at the actual path of your f32 files according to your situation.
  2. Only train Tacotron; you have another vocoder, LPCNet, instead of WaveNet.

MlWoo avatar May 17 '19 06:05 MlWoo
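MlWoo's two steps (soft-linking the audio folder to the f32 dumps, and rewriting meta[0]) might look roughly like the sketch below; the folder names and the train.txt column layout are assumptions, so adapt them to your actual tree:

```python
import os
import tempfile

# Hypothetical layout: a training_data dir with train.txt plus a dir of f32 dumps.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "f32_features"))
with open(os.path.join(root, "train.txt"), "w") as f:
    f.write("speech-audio-0001.npy|speech-mel-0001.npy|speech-linear-0001.npy|some text\n")

# Step 1: soft-link the audio folder to the f32 feature dumps.
os.symlink(os.path.join(root, "f32_features"), os.path.join(root, "audio"))

# Step 2: rewrite the first column (meta[0]) so it names the .f32 files.
train_txt = os.path.join(root, "train.txt")
with open(train_txt) as f:
    lines = [line.rstrip("\n").split("|") for line in f]
for cols in lines:
    cols[0] = os.path.splitext(cols[0])[0] + ".f32"
with open(train_txt, "w") as f:
    for cols in lines:
        f.write("|".join(cols) + "\n")
```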

So while training Tacotron-2, I should replace/soft-link the f32 files into the audio folder of training_data (after preprocessing), and the first column of train.txt (meta[0]) should be the actual names of the f32 files, right? If the above is taken care of, the following lines should be added to feeder.py:

mel_target = np.fromfile(os.path.join(self._audio_dir, meta[0]), dtype='float32')
mel_target = np.resize(mel_target, (-1, self._hparams.num_mels))

Basically, which files should be loaded in the feeder: audio or mel spectrum?

alokprasad avatar May 17 '19 09:05 alokprasad

Of course, you can do that, as long as your path points to the .f32 file.

MlWoo avatar May 17 '19 10:05 MlWoo

Hi, the features extracted with the feature_extract.sh script are saved as .f32 files and then used to train Tacotron-2. But normally Tacotron-2 is used to predict mel spectrograms. Here, with T2 + LPCNet, is the prediction target of T2 changed, or is the mel spectrogram simply replaced with the .f32 features?

superhg2012 avatar May 24 '19 08:05 superhg2012

@superhg2012 move the generated f32 files into the audio folder. Check this diff based on MlWoo's changes: https://github.com/alokprasad/LPCTron/blob/master/Tacotron-2/Tacotron2-lpcnet_changes.diff

alokprasad avatar May 24 '19 08:05 alokprasad

@alokprasad thanks a lot !!

superhg2012 avatar May 24 '19 08:05 superhg2012

@alokprasad can you post your samples?

superhg2012 avatar Jun 13 '19 09:06 superhg2012

@superhg2012 Please find the recordings (they are not good). I think we should retrain LPCNet with the f32 features generated from Tacotron-2: https://vocaroo.com/i/s1Dx9nbKFeuY https://vocaroo.com/i/s1VRBWayVzrD

alokprasad avatar Jun 13 '19 10:06 alokprasad

@alokprasad I cannot reach the links you posted; please refer to the audio sample posted in #1 (https://github.com/MlWoo/LPCNet/issues/1). It seems that the author did not use GTA training mode.

superhg2012 avatar Jun 13 '19 11:06 superhg2012

@alokprasad You have a lot of work to do, because you should calculate the length of the audio according to the number of frames and append the tail to the audio. We did not use GTA mode because the job is trivial. LPCNet is sensitive to the pitch params; I think GTA mode will just turn one deviation from the baseline into another if T2 is not trained well.

MlWoo avatar Jun 13 '19 14:06 MlWoo
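A rough numpy sketch of the length bookkeeping MlWoo describes, assuming LPCNet's usual 16 kHz audio and 160-sample (10 ms) feature frames; the function name is made up for illustration:

```python
import numpy as np

SAMPLE_RATE = 16000  # LPCNet operates on 16 kHz audio
FRAME_SIZE = 160     # samples per feature frame (10 ms at SAMPLE_RATE)

def pad_audio_to_frames(audio, num_frames):
    """Pad (or trim) raw samples so their length matches num_frames
    feature frames, appending a silent tail if the audio is short."""
    target_len = num_frames * FRAME_SIZE
    if len(audio) < target_len:
        tail = np.zeros(target_len - len(audio), dtype=audio.dtype)
        audio = np.concatenate([audio, tail])
    return audio[:target_len]

# 100 feature frames correspond to 100 * FRAME_SIZE = 16000 samples (1 s).
audio = np.random.randn(15900).astype(np.float32)
aligned = pad_audio_to_frames(audio, 100)
```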

@MlWoo Do you mean that each audio file should be the same length, or that it should be an integral multiple of the frame size?

alokprasad avatar Jun 14 '19 13:06 alokprasad

@alokprasad More work: LPCNet cuts off silence in the audio by default, so you should modify the LPCNet code to cooperate with the GTA output of T2.

MlWoo avatar Jun 15 '19 01:06 MlWoo

@MlWoo Can you point to the code in LPCNet where the modification needs to be done?

alokprasad avatar Jul 11 '19 07:07 alokprasad

@MlWoo I saw that there is silence removal in https://github.com/mozilla/LPCNet/commit/554b6df65eca11f572e4a7d3b266f54a37a4a17f, and this is needed only during training of LPCNet. Should I remove this code and train LPCNet?

alokprasad avatar Jul 11 '19 08:07 alokprasad

@MlWoo Can we add this to the Tacotron training for silence removal? https://github.com/gooofy/zamia-tts/commit/66bd10d4c34f215eaf37fba7e712047291002ace

alokprasad avatar Jul 15 '19 05:07 alokprasad

@alokprasad Tacotron training with silence removal is maybe a good idea when training English. It is a bad idea when training Chinese (Mandarin), because the very short silences are beneficial to the prosody. I am not very sure whether it is good for English, cauz' I am not a native English speaker. Removing the long silence at the beginning and end of an audio clip is necessary when training Tacotron-2.

MlWoo avatar Aug 08 '19 03:08 MlWoo
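MlWoo's point about removing only the long leading and trailing silence, while keeping short internal pauses, could be sketched with a simple per-frame RMS gate; the 160-sample frame size and the threshold here are assumptions, not LPCNet's actual trimming code:

```python
import numpy as np

FRAME = 160  # samples per analysis frame (assumed, matching LPCNet's 10 ms)

def trim_edge_silence(audio, threshold=1e-3):
    """Drop leading and trailing silent frames (per-frame RMS below the
    threshold) while leaving short internal pauses untouched."""
    n = len(audio) // FRAME
    rms = np.sqrt(np.mean(audio[: n * FRAME].reshape(n, FRAME) ** 2, axis=1))
    voiced = np.where(rms > threshold)[0]
    if voiced.size == 0:
        return audio[:0]  # all silence
    return audio[voiced[0] * FRAME : (voiced[-1] + 1) * FRAME]

# silence | tone | silence: only the outer silence is removed.
sig = np.concatenate([np.zeros(1600),
                      0.5 * np.sin(np.linspace(0.0, 100.0, 3200)),
                      np.zeros(1600)]).astype(np.float32)
trimmed = trim_edge_silence(sig)
```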

> @superhg2012 move the generated f32 files into the audio folder. Check this diff based on MlWoo's changes: https://github.com/alokprasad/LPCTron/blob/master/Tacotron-2/Tacotron2-lpcnet_changes.diff

I see you save the audio (from preprocessing) to meta[0], so do you use the audio as the mel_target to train Tacotron-2?

lmingde avatar Aug 20 '19 12:08 lmingde

> Hi, the features extracted with the feature_extract.sh script are saved as .f32 files and then used to train Tacotron-2. But normally Tacotron-2 is used to predict mel spectrograms. Here, with T2 + LPCNet, is the prediction target of T2 changed, or is the mel spectrogram simply replaced with the .f32 features?

Is that the way to train T2? I don't understand it well.

lmingde avatar Aug 20 '19 12:08 lmingde

> Hi, the features extracted with the feature_extract.sh script are saved as .f32 files and then used to train Tacotron-2. But normally Tacotron-2 is used to predict mel spectrograms. Here, with T2 + LPCNet, is the prediction target of T2 changed, or is the mel spectrogram simply replaced with the .f32 features?
>
> Is that the way to train T2? I don't understand it well.

I see, we use the f32 features to train T2 instead of the mel features.

lmingde avatar Aug 26 '19 08:08 lmingde

> @MlWoo I saw that there is silence removal in mozilla@554b6df, and this is needed only during training of LPCNet. Should I remove this code and train LPCNet?

I am confused about the trim.

@alokprasad I used your LPCTron code, and the output voice is bad. Is it the effect of the hparams? And by the way, do we need to create the mel and linear spectrograms in the Tacotron preprocessing? I don't see that we use the mel or linear features for training when we use Tacotron+LPCNet.

lmingde avatar Aug 26 '19 08:08 lmingde

@superhg2012 Is the audio quality of TTS + LPCNet good? how did you make it?

byuns9334 avatar Sep 21 '19 02:09 byuns9334

@byuns9334 I don't get good quality with T2 + LPCNet (20-dim), but I get better quality with T1 and LPCNet (55-dim).

@lmingde I put the dumped f32 files into the audio dir; when training T2, the f32 files in the audio dir are fed as the mel_target for training.

superhg2012 avatar Oct 10 '19 09:10 superhg2012

@superhg2012 I guess T1 and T2 are the same except for the vocoder part, which in any case we replace with LPCNet. Can you share the changes for T1 with LPCNet?

alokprasad avatar Oct 14 '19 05:10 alokprasad

@alokprasad About LPCNet model training: 55-dim features work better than 20-dim. About T1, no special changes, just train with the 55-dim features.

superhg2012 avatar Oct 14 '19 08:10 superhg2012

nb_features is already 55, so you mean no changes in LPCNet, just train it? https://github.com/mozilla/LPCNet/blob/master/src/train_lpcnet.py

For T1, instead of num_mels = 20 you mean num_mels = 55?

alokprasad avatar Oct 14 '19 09:10 alokprasad

Yes, just try it. When testing synthesis, build without the taco=1 flag: make clean && make test_lpcnet

superhg2012 avatar Oct 14 '19 09:10 superhg2012

@MlWoo Hi, may I know, in the training stage when feeding a batch of samples to Tacotron, what padding values are used to ensure the f32 features (whether 20- or 55-dim) have the same length? I noticed that -0.1 is used in alokprasad's LPCTron implementation.

wangfn avatar Oct 18 '19 19:10 wangfn

@wangfn I have forgotten it. No worries, just mask the padding values when calculating the loss.

MlWoo avatar Oct 19 '19 01:10 MlWoo
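To make the exchange concrete, padding with the -0.1 constant and then masking it out of the loss can be sketched as below; the pad value follows alokprasad's LPCTron feeder, while the helper names and the masked-MSE formulation are illustrative, not the actual Tacotron code:

```python
import numpy as np

PAD_VALUE = -0.1  # padding constant used in the LPCTron feeder

def pad_batch(feature_list, pad=PAD_VALUE):
    """Right-pad variable-length (frames, dims) feature arrays to a common
    length; return the padded batch and a per-frame validity mask."""
    max_len = max(f.shape[0] for f in feature_list)
    dims = feature_list[0].shape[1]
    batch = np.full((len(feature_list), max_len, dims), pad, dtype=np.float32)
    mask = np.zeros((len(feature_list), max_len), dtype=np.float32)
    for i, f in enumerate(feature_list):
        batch[i, : f.shape[0]] = f
        mask[i, : f.shape[0]] = 1.0
    return batch, mask

def masked_mse(pred, target, mask):
    """Mean squared error over real frames only, so the padding value
    never contributes to the loss."""
    per_frame = ((pred - target) ** 2).mean(axis=2)
    return (per_frame * mask).sum() / mask.sum()

a = np.ones((3, 2), dtype=np.float32)
b = np.ones((5, 2), dtype=np.float32)
batch, mask = pad_batch([a, b])
```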

@MlWoo Thanks a lot, indeed masking the padding values is the solution.

wangfn avatar Oct 19 '19 09:10 wangfn

> @byuns9334 I don't get good quality with T2 + LPCNet (20-dim), but I get better quality with T1 and LPCNet (55-dim).
>
> @lmingde I put the dumped f32 files into the audio dir; when training T2, the f32 files in the audio dir are fed as the mel_target for training.

@superhg2012 What changes are required in LPCNet to go from 20 to 55 dims? I think it stores 55 but only 20 are needed. Are any changes needed in the Tacotron-2 training if we change the dims in LPCNet?

alokprasad avatar Apr 17 '20 03:04 alokprasad
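For reference, if the stored frames are 55 floats wide but only 20 are consumed (18 Bark cepstral coefficients plus 2 pitch parameters), the selection might look like the sketch below. The index layout (cepstrum at 0-17, pitch at 36-37) follows common LPCTron writeups and should be verified against your LPCNet version:

```python
import numpy as np

NB_FEATURES = 55  # floats stored per frame in the .f32 dumps

def select_used_features(frames55):
    """Keep the 20 values the network consumes: 18 Bark cepstral
    coefficients plus the 2 pitch parameters (assumed index layout)."""
    return np.concatenate([frames55[:, :18], frames55[:, 36:38]], axis=1)

frames = np.random.randn(4, NB_FEATURES).astype(np.float32)
used = select_used_features(frames)
```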