vall-e
How to use multiple GPUs in training?
I saw the solution in a closed issue: `python -m torch.distributed.launch --nproc_per_node 2 -m vall_e.train yaml=config/your_data/ar.yml`. This command does use both GPUs, but training is no faster than on a single GPU.
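As a side note, `torch.distributed.launch` has been deprecated in recent PyTorch releases in favor of `torchrun`. An equivalent invocation (a sketch, assuming the same config path from the command above) would be:

```shell
# torchrun replaces torch.distributed.launch in newer PyTorch versions;
# --nproc_per_node 2 starts one training process per GPU.
torchrun --nproc_per_node 2 -m vall_e.train yaml=config/your_data/ar.yml
```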
I know this was closed, but I found that while the per-step speed is the same or a little slower, it converges a lot faster.
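The faster convergence is consistent with how DDP works: each process draws its own mini-batch, so the effective batch size scales with the number of GPUs while per-step time stays roughly constant. A rough sketch with hypothetical numbers (not from this thread):

```python
# Hypothetical numbers, purely to illustrate the effect.
dataset_size = 100_000   # total training samples
per_gpu_batch = 8        # mini-batch size each process sees

def steps_per_epoch(num_gpus: int) -> int:
    # With DDP, each of the num_gpus processes consumes its own
    # mini-batch per step, so the effective batch size is
    # per_gpu_batch * num_gpus and an epoch takes fewer steps.
    effective_batch = per_gpu_batch * num_gpus
    return dataset_size // effective_batch

print(steps_per_epoch(1))  # 12500 optimizer steps per epoch on one GPU
print(steps_per_epoch(2))  # 6250 steps -- half as many, at similar per-step cost
```

So wall-clock time per step barely changes, but the model sees twice the data per step, which is why loss drops faster per unit of time.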
Here's what I'm testing on:

- dev-clean subset of LibriTTS
- Kaggle, 2x T4 GPUs
@coddiw0mple
thanks for your reply. Can I ask for your settings and results?
I use the 360-hour LibriTTS subset and the quarter setting for the model parameters, but the result is not good enough.