Kyubyong Park


That's from TensorBoard. I guess 200 epochs, or 1,000 global steps, are far from enough. Note that the paper says they trained for 2 million global steps. And I think...

As I mentioned in the README file, I haven't achieved any promising results yet. I'll share them if there are any updates.

Do you mean you trained with only one or two samples? Would you share your results or your pretrained model file?

It's because I use dynamic padding in training and evaluation. I found TensorFlow can't infer the timesteps of a batch, so I just precalculated and fixed them explicitly. You can...

Ok. First, pay attention to line 196 in `train.py`:

```python
x, y, z = tf.train.batch([x, y, z],
                         shapes=[(None,), (None, hp.n_mels*hp.r), (None, (1+hp.n_fft//2)*hp.r)],
                         num_threads=32,
                         batch_size=hp.batch_size,
                         capacity=hp.batch_size*32,
                         dynamic_pad=True)
```

The last option, `dynamic_pad=True`, means the size...
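For anyone unfamiliar with `dynamic_pad`, here is a minimal self-contained sketch of the same idea (TF 1.x queue API; the variable-length input source here is made up purely for illustration): with `dynamic_pad=True`, every sequence in a mini-batch is zero-padded to the length of the longest one, which is why `None` dimensions in `shapes` are allowed.

```python
import tensorflow as tf  # TF 1.x

# Made-up variable-length sequence source: each dequeue produces
# a 1-D tensor of random length between 2 and 9.
length = tf.random_uniform([], minval=2, maxval=10, dtype=tf.int32)
seq = tf.ones([length], dtype=tf.int32)

# dynamic_pad=True uses a PaddingFIFOQueue under the hood, so the
# time dimension can stay None; each batch is padded to its longest member.
batch = tf.train.batch([seq], batch_size=4, num_threads=1,
                       capacity=32, shapes=[(None,)], dynamic_pad=True)

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    print(sess.run(batch))  # rows of ones, zero-padded to a common length
    coord.request_stop()
    coord.join(threads)
```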

candlewill's explanation is correct. I added `train_multi_gpus.py` for using multiple GPUs. @basuam @candlewill Would you run and check the file? In my environment (3 × GTX 1080), the time for...
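For context, here is a minimal sketch of the data-parallel tower pattern such a script typically follows; this is an assumption about how `train_multi_gpus.py` is structured, not its actual contents. Each GPU computes gradients on its own shard of the batch, the gradients are averaged across towers, and a single update is applied.

```python
import tensorflow as tf  # TF 1.x

NUM_GPUS = 3  # hypothetical; the repo reads this from hp.num_gpus

def tower_loss(x_shard):
    # Hypothetical toy model standing in for the real graph in train.py.
    w = tf.get_variable("w", shape=[], initializer=tf.ones_initializer())
    return tf.reduce_mean(tf.square(x_shard * w - 1.0))

opt = tf.train.AdamOptimizer(1e-3)
x = tf.random_normal([32 * NUM_GPUS])  # the full mini-batch
shards = tf.split(x, NUM_GPUS)         # one shard per GPU

tower_grads = []
for i in range(NUM_GPUS):
    # Reuse the same variables on every tower after the first.
    with tf.device("/gpu:%d" % i), tf.variable_scope("model", reuse=(i > 0)):
        tower_grads.append(opt.compute_gradients(tower_loss(shards[i])))

# Average each variable's gradients across towers and apply them once.
avg_grads = []
for grads_and_vars in zip(*tower_grads):  # grouped per variable
    grads = [g for g, _ in grads_and_vars]
    var = grads_and_vars[0][1]
    avg_grads.append((tf.add_n(grads) / NUM_GPUS, var))
train_op = opt.apply_gradients(avg_grads)

with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op)
```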

Did you run `train_multi_gpus.py`?

You changed the value of `num_gpus` in `hyperparams.py`, didn't you?

One possibility is the batch size. If you have 4 GPUs, you have to multiply `hp.batch_size` by 4 for a fair comparison. If you look at the code, mini-batch samples...
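To make the arithmetic concrete (hypothetical numbers; `hp.batch_size` and `num_gpus` are the repo's hyperparameters): the global batch is sharded across towers, so each GPU only sees `hp.batch_size / num_gpus` samples per step.

```python
# Hypothetical illustration of the batch-size point above.
batch_size = 32   # hp.batch_size: the global batch fed to tf.train.batch
num_gpus = 4      # hp.num_gpus

per_gpu = batch_size // num_gpus
print(per_gpu)    # 8 -- each tower trains on only 8 samples per step

# For a fair comparison with single-GPU training at batch_size=32,
# scale the global batch so each tower still sees 32 samples:
print(batch_size * num_gpus)  # 128
```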

@candlewill Oh, and I removed the `tf.device('/cpu:0')` line. I had forgotten to remove it. Thanks.