
Multi-GPU training and expected epochs

Open bieltura opened this issue 3 years ago • 5 comments

Hi,

First of all, thanks for the nice paper and for releasing the code. I am testing your model on a different dataset, and two questions have come up:

  1. What is the estimated number of epochs needed to train the model? We have experienced some degradation when the model is overtrained on the data (overfitting?).
  2. Is there a way to train the model in a multi-GPU setup? We have more GPUs available, but the code seems to run only on the first available GPU given by the CUDA_VISIBLE_DEVICES argument.

Thanks!

bieltura avatar Jan 17 '22 17:01 bieltura

@bieltura Hi! Thank you for your interest in our Grad-TTS work.

  1. The Grad-TTS model from the paper was trained for 1.7M iterations, which corresponds to approximately 2300 epochs. Usually we trained our models for up to 2000 epochs with mini-batch size 16 and 2-second speech fragments (the out_size argument in params.py; see the snippet after this list).
  2. Sorry, our code is not adapted for multi-GPU training, but you can easily change train.py or train_multi_speaker.py according to the best PyTorch multi-GPU training practices.
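
For reference, a rough sketch of how the 2-second fragment length translates into the out_size frame count (the 22050 Hz sample rate and 256 hop length are assumed LJSpeech-style values, not read from params.py, so check the config for the actual numbers):

```python
# Assumed LJSpeech-style feature settings; check params.py for the real values.
sample_rate = 22050   # Hz
hop_length = 256      # audio samples per mel frame
fragment_seconds = 2

# Number of mel frames corresponding to a 2-second training fragment.
out_size = fragment_seconds * sample_rate // hop_length   # 172 frames
print(out_size)
```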

ivanvovk avatar Jan 17 '22 21:01 ivanvovk

Hi @ivanvovk,

Thanks for answering the questions. Here's an update that may be helpful for future development:

DataParallel cannot be used with the current setup, because the compute_loss method is not part of the model's forward pass. The solution is to adapt forward to compute the loss function and to add a separate method for inference (on a single GPU).
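
A minimal sketch of that kind of wrapper, assuming compute_loss takes the batch tensors and returns the individual losses as in the repo (the wrapper class itself and its argument list are illustrative, not part of the original code):

```python
import torch.nn as nn


class GradTTSLossWrapper(nn.Module):
    """Exposes loss computation through forward() so nn.DataParallel can
    replicate it across GPUs. The wrapped GradTTS model is still used
    directly (on a single GPU) for inference."""

    def __init__(self, model, out_size):
        super().__init__()
        self.model = model
        self.out_size = out_size

    def forward(self, x, x_lengths, y, y_lengths):
        # Assumed to return (dur_loss, prior_loss, diff_loss); check model/tts.py.
        return self.model.compute_loss(x, x_lengths, y, y_lengths,
                                       out_size=self.out_size)


# Usage sketch:
#   parallel = nn.DataParallel(GradTTSLossWrapper(model, out_size)).cuda()
#   dur_loss, prior_loss, diff_loss = parallel(x, x_lengths, y, y_lengths)
#   # DataParallel gathers one value per replica, so average before backward():
#   loss = sum(l.mean() for l in (dur_loss, prior_loss, diff_loss))
```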

Apart from that, I have found that with multiple GPUs the code breaks when, within a batch, an audio sample is shorter than the 2-second speech fragment. The solution is to force the cut length to always be these 2 seconds (in frames).

Change

`y_cut_mask = sequence_mask(y_cut_lengths).unsqueeze(1).to(y_mask)`

to

`y_cut_mask = sequence_mask(torch.LongTensor([out_size] * len(y_cut_lengths))).unsqueeze(1).to(y_mask)`

I still find that 2300 epochs on a single GPU is a very large amount of training. Did you follow any procedure to check when the model had converged to the best checkpoint?

Thanks

bieltura avatar Jan 26 '22 12:01 bieltura

@bieltura it is usually preferable to use DistributedDataParallel instead of DataParallel. It is faster, and, if I am not mistaken, there are no such problems with the forward pass in the DDP setting.
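
For anyone adapting train.py later, a minimal DDP setup sketch (the torch.distributed, DistributedDataParallel and DistributedSampler calls are standard PyTorch; the helper function names and the way the model/dataset are passed in are illustrative):

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler


def setup_ddp():
    """Initialize one process per GPU; expects launch via
    `torchrun --nproc_per_node=<num_gpus> train.py`."""
    dist.init_process_group(backend='nccl')
    local_rank = int(os.environ['LOCAL_RANK'])
    torch.cuda.set_device(local_rank)
    return local_rank


def wrap_for_ddp(model, dataset, batch_size, collate_fn, local_rank):
    """Give each rank its own data shard and wrap the model for gradient sync."""
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=batch_size, sampler=sampler,
                        collate_fn=collate_fn, num_workers=4, pin_memory=True)
    ddp_model = DDP(model.cuda(local_rank), device_ids=[local_rank])
    return ddp_model, loader, sampler   # call sampler.set_epoch(epoch) each epoch
```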

As for checking the convergence of the model, we just checked the quality at 10 iterations, and when it became good, we stopped training. Nothing special.

ivanvovk avatar Jan 27 '22 10:01 ivanvovk

Thanks! As a side note, we have been using the Energy metric (predicted vs. target difference) to check whether samples are "good enough" for evaluation. As you mentioned in your paper, the diffusion loss is not very informative about model convergence, since it has to cover all possible time steps from 0 to T and the step is picked randomly for each update. Here are some plots that may be useful to you as well. Feel free to close the issue once you have read this :) And again, thanks for everything.

[plots: Energy metric (predicted vs. target) over the course of training]
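
A rough sketch of what such an energy check could look like (the exact metric above isn't spelled out, so this assumes a masked per-frame energy difference between predicted and target mel-spectrograms; the function and argument names are made up for illustration):

```python
import torch


def energy_difference(y_pred, y_true, y_mask):
    """Mean absolute difference of per-frame energy between predicted and
    target mel-spectrograms. Shapes: (batch, n_mels, T) for y_pred/y_true,
    (batch, 1, T) for y_mask (1 for valid frames, 0 for padding)."""
    energy_pred = (y_pred ** 2).sum(dim=1, keepdim=True)   # (batch, 1, T)
    energy_true = (y_true ** 2).sum(dim=1, keepdim=True)
    diff = (energy_pred - energy_true).abs() * y_mask
    return diff.sum() / y_mask.sum()
```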

bieltura avatar Feb 08 '22 09:02 bieltura

For my case, I found Accelerate (https://github.com/huggingface/accelerate) very useful; it only takes a few lines of code.
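
A minimal sketch of the Accelerate pattern (the Accelerator, prepare and backward calls come from the Accelerate docs; the loss_fn placeholder and the loop structure stand in for the existing code in train.py):

```python
from accelerate import Accelerator


def train_with_accelerate(model, optimizer, loader, n_epochs, loss_fn):
    """prepare() handles device placement and multi-GPU wrapping;
    accelerator.backward() replaces loss.backward().
    Run the script with `accelerate launch train.py`."""
    accelerator = Accelerator()
    model, optimizer, loader = accelerator.prepare(model, optimizer, loader)
    for epoch in range(n_epochs):
        for batch in loader:
            optimizer.zero_grad()
            loss = loss_fn(model, batch)    # existing Grad-TTS loss computation
            accelerator.backward(loss)
            optimizer.step()
    return model
```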

iooops avatar Apr 21 '23 06:04 iooops