Yunchao He


I trained the model with the full data for about 130 epochs. The best loss I got was about 0.14. The loss figure is as follows: ![image](https://cloud.githubusercontent.com/assets/4916563/26668252/6cca683c-46db-11e7-8723-7e203cd85711.png) Here is the synthesized...

@Spotlight0xff I kept all the hyperparameters unchanged.

@minsangkim142 It takes about five days with two Tesla M40 24GB GPUs (just one for computation).

New synthesized speech samples here: [http://pan.baidu.com/s/1miohdVy](http://pan.baidu.com/s/1miohdVy ) It was trained on a small dataset: just `Revelation` from the Bible was used. Epoch 2000. Best loss 0.53. ![image](https://cloud.githubusercontent.com/assets/4916563/26771037/2908977a-49ee-11e7-9861-39a943e06814.png)

Coming soon!

@hmubarak I have the same problem as you. All of the generated audio looks like this: ![image](https://user-images.githubusercontent.com/4916563/26957675-2ecf871e-4cf9-11e7-884d-9fae63b313ff.png) Loss ![image](https://user-images.githubusercontent.com/4916563/26957712-706fdb4c-4cf9-11e7-944f-28c412e82282.png)

@kyoguan The loss curve is pretty good. How much training data did you use?

@basuam I used Python 3.6.0 with Anaconda 4.3.1 (64-bit) and the GPU build of TensorFlow 1.1. During training, both GPUs are used, but just one for...

To my understanding, this may be the reason: if device placement across multiple GPUs is not declared explicitly, TensorFlow chooses the first GPU for computation by default, but use...
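A minimal sketch of one common workaround (not from the thread itself): setting the standard `CUDA_VISIBLE_DEVICES` environment variable before TensorFlow is imported restricts which GPUs the process can see at all, so the second GPU is never claimed for memory. The device index `"0"` here is just an example.

```python
import os

# Must be set before TensorFlow creates a session: the CUDA runtime reads
# this variable once, and only the listed devices are visible afterwards.
# "0" exposes only the first GPU; the other GPU is left untouched.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# Any TensorFlow code imported/run after this point sees a single GPU,
# so the default placer cannot spread memory across both cards.
print(os.environ["CUDA_VISIBLE_DEVICES"])
```

This works with any TensorFlow version, since it is enforced by the CUDA runtime rather than the framework.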

@Kyubyong In the current `train.py` code, the training runs entirely on the CPU (see [here](https://github.com/Kyubyong/tacotron/blob/master/train.py#L29) ). I commented out this line to allow using one GPU. Then, I tried to compare the...