Parallel-Tacotron2
PyTorch Implementation of Google's Parallel Tacotron 2: A Non-Autoregressive Neural TTS Model with Differentiable Duration Modeling
Hi. The work is amazing. I noticed that you mentioned in "Updates" that there were some bugs in soft-DTW. Have you solved these problems yet?
Thanks for sharing the nice model implementation. When I start training, the following warning appears; do you get the same message? I think it's a fairseq installation problem...
```
File "/data1/hjh/pycharm_projects/tts/parallel-tacotron2_try/model/parallel_tacotron2.py", line 68, in forward
    self.learned_upsampling(durations, V, src_lens, src_masks, max_src_len)
File "/home/huangjiahong.dracu/miniconda2/envs/parallel_tc2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
File "/data1/hjh/pycharm_projects/tts/parallel-tacotron2_try/model/modules.py", line 335, in forward
    mel_mask =...
```
Hi, thanks for your excellent work! Could you possibly share your audio samples, pretrained models, and loss curves with me? Thanks so much for your help!
Hey, I've found that in your implementation of the S-DTW backward pass, the E matrices are not used; instead you use the G matrices, and their entries ignore the scaling factors...
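For reference, the backward recursion from the original soft-DTW formulation (Cuturi & Blondel, 2017) propagates an E matrix of soft path weights, with exponential factors that carry the gamma scaling the comment above refers to. Below is a minimal NumPy sketch of both recursions; the function names `sdtw_forward`/`sdtw_backward` are illustrative and this is not the repo's code:

```python
import numpy as np

def softmin(a, b, c, gamma):
    # Smooth minimum: -gamma * logsumexp(-x / gamma), computed stably.
    vals = np.array([a, b, c]) / -gamma
    m = vals.max()
    return -gamma * (m + np.log(np.exp(vals - m).sum()))

def sdtw_forward(D, gamma):
    """Forward DP over an (m, n) cost matrix D; returns the padded R matrix.

    The soft-DTW loss is R[m, n]."""
    m, n = D.shape
    R = np.full((m + 2, n + 2), np.inf)
    R[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            R[i, j] = D[i - 1, j - 1] + softmin(
                R[i - 1, j - 1], R[i - 1, j], R[i, j - 1], gamma)
    return R

def sdtw_backward(D, R, gamma):
    """Backward pass: E[i, j] = d(loss)/d(D[i, j]) via the E-matrix recursion."""
    m, n = D.shape
    Dp = np.zeros((m + 2, n + 2))          # zero-padded costs
    Dp[1:m + 1, 1:n + 1] = D
    E = np.zeros((m + 2, n + 2))
    E[m + 1, n + 1] = 1.0
    R = R.copy()
    R[:, n + 1] = -np.inf
    R[m + 1, :] = -np.inf
    R[m + 1, n + 1] = R[m, n]
    for j in range(n, 0, -1):
        for i in range(m, 0, -1):
            # The gamma-scaled exponentials are the soft transition weights.
            a = np.exp((R[i + 1, j] - R[i, j] - Dp[i + 1, j]) / gamma)
            b = np.exp((R[i, j + 1] - R[i, j] - Dp[i, j + 1]) / gamma)
            c = np.exp((R[i + 1, j + 1] - R[i, j] - Dp[i + 1, j + 1]) / gamma)
            E[i, j] = a * E[i + 1, j] + b * E[i, j + 1] + c * E[i + 1, j + 1]
    return E[1:m + 1, 1:n + 1]
```

The returned E matrix is exactly the gradient of the loss with respect to the pairwise cost matrix, so it can be sanity-checked against finite differences of `sdtw_forward`; dropping the gamma division inside the exponentials breaks that check for any gamma != 1.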
Just wondering if we can train on LJSpeech with this implementation. Thanks!
I cloned the code, prepared the data according to the README, and made just two changes: 1. updated the LJSpeech data path in config/LJSpeech/train.yaml, 2. unzipped generator_LJSpeech.pth.tar.zip to get generator_LJSpeech.pth.tar, and the code runs! But...
Hello, has anybody been able to train with the soft-DTW loss? It doesn't converge at all. I think there is a problem with the implementation, but I couldn't spot it. When...
Can someone share a link to the weights file? I couldn't synthesize with it or run its inference. If I'm doing something wrong, please tell me the correct way to use it. Thanks.