
How to use multiple-GPU in training?

Open yiwei0730 opened this issue 3 years ago • 2 comments

I saw the solution in a closed issue: `python -m torch.distributed.launch --nproc_per_node 2 -m vall_e.train yaml=config/your_data/ar.yml`. With this command I can use two GPUs, but training is no faster than with one GPU.
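As a side note: on newer PyTorch releases, `torch.distributed.launch` is deprecated in favor of `torchrun`. Assuming the same config path as above, a roughly equivalent invocation would be:

```shell
# torchrun is the replacement for python -m torch.distributed.launch
# (available since PyTorch 1.10); config path taken from the command above,
# adjust to your own setup
torchrun --nproc_per_node=2 -m vall_e.train yaml=config/your_data/ar.yml
```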

yiwei0730 avatar Feb 16 '23 06:02 yiwei0730

I know this issue was closed, but I found that while the per-step speed is the same or slightly slower, training converges a lot faster.

Here's what I'm testing on:

- dev-clean of LibriTTS
- Kaggle 2x T4 GPUs
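The faster convergence makes sense: with data-parallel training, each GPU processes its own shard of every batch, so the effective batch size per optimizer step doubles and an epoch takes half as many steps. A minimal sketch of the arithmetic, with hypothetical dataset and batch sizes (not numbers from this thread):

```python
# Why DDP converges faster per epoch even when per-step speed is unchanged:
# each of N GPUs consumes per_gpu_batch samples per step, so the effective
# batch is N * per_gpu_batch and an epoch needs proportionally fewer steps.

def steps_per_epoch(num_samples: int, per_gpu_batch: int, num_gpus: int) -> int:
    """Optimizer steps needed to see the whole dataset once under DDP."""
    effective_batch = per_gpu_batch * num_gpus
    return -(-num_samples // effective_batch)  # ceiling division

samples = 10_000   # hypothetical dataset size
batch = 8          # hypothetical per-GPU batch size

print(steps_per_epoch(samples, batch, 1))  # 1250 steps with one GPU
print(steps_per_epoch(samples, batch, 2))  # 625 steps with two GPUs
```

So even if each step takes the same wall-clock time, two GPUs finish an epoch in about half the steps, which is why the loss drops faster per hour of training.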

coddiw0mple avatar Feb 26 '23 17:02 coddiw0mple

@coddiw0mple
Thanks for your reply. Can I ask for your settings and results? I used the 360-hour LibriTTS set and cut the model parameters to a quarter for training, but the results are not good enough.

yiwei0730 avatar Feb 27 '23 10:02 yiwei0730