Yunchao He
I trained the model with the full data for about 130 epochs. The best loss I got was about 0.14. The loss figures are as follows: [loss figures] Here is the synthesized...
@Spotlight0xff I kept all the hyperparameters unchanged.
@minsangkim142 It takes about five days with two Tesla M40 24GB GPUs (just one for computation).
New synthesized speech samples here: [http://pan.baidu.com/s/1miohdVy](http://pan.baidu.com/s/1miohdVy) It was trained on a small dataset: just `Revelation` from the Bible was used. Epoch 2000. Best loss 0.53.
Coming soon!
@hmubarak I have the same problem as you. All of the generated voice sounds like this: [audio sample] [loss curve]
@kyoguan The loss curve is pretty good. How much training data did you use?
@basuam I used Python 3.6.0 with Anaconda 4.3.1 (64-bit) and the GPU version of TensorFlow (1.1). When training, both GPUs are used, but just one for...
To my understanding, maybe this is the reason: if device placement across multiple GPUs is not declared explicitly, TensorFlow chooses the first GPU for computation by default, but uses...
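In case it helps, here is a minimal sketch (TensorFlow 1.x, matching the TF 1.1 setup above) of the two usual workarounds for this: hiding the extra GPU from TensorFlow entirely, or turning off the default behavior of reserving memory on every visible GPU. The option names are standard TF 1.x API; everything else is illustrative:

```python
import os

# Option 1: hide the second GPU entirely. This must be set before
# TensorFlow initializes CUDA, i.e. before the first session is created.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import tensorflow as tf

# Option 2: allocate GPU memory on demand instead of reserving
# (almost) all memory on every visible GPU at startup.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

with tf.Session(config=config) as sess:
    print(sess.run(tf.constant("session up")))
```

With either option, the second GPU stays free for other jobs even though TensorFlow can see (or grab memory on) all GPUs by default.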
@Kyubyong In the current `train.py` code, the training runs entirely on the CPU (see [here](https://github.com/Kyubyong/tacotron/blob/master/train.py#L29)). I commented this line out to allow using one GPU. Then I tried to compare the...
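I have not verified exactly what `train.py#L29` contains, but assuming it is a `tf.device('/cpu:0')` context wrapping the graph, the change looks roughly like this sketch; `build_graph()` is a hypothetical placeholder, not a function from the repo:

```python
import tensorflow as tf

# Before (what I assume train.py#L29 does): pin the whole graph to the CPU.
# with tf.device('/cpu:0'):
#     loss = build_graph()  # hypothetical graph-building helper

# After: comment out the pin so TensorFlow places ops on the first visible
# GPU by default, or pin explicitly:
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0])
    b = tf.constant([3.0, 4.0])
    loss = tf.reduce_sum(a * b)

# allow_soft_placement lets ops without a GPU kernel fall back to the CPU
# instead of raising an error.
config = tf.ConfigProto(allow_soft_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(loss))  # 11.0
```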