DiffSinger
Multi-GPU training & batchsize problem
Hi, I really appreciate your work and now I'm going to train the model on this pipeline. My issues are as follows:
- I noticed that you have adapted the code to a multi-GPU version with DDP, but I can't figure out how to actually train with multiple GPUs. Should I set

  ```python
  self.use_ddp = True
  ```

  here?
- In the paper you mention that you trained DiffSinger on 1 NVIDIA V100 GPU with a batch size of 48. However, I can't find any configurable variable related to the batch size. If I want to train with multiple GPUs, is it necessary to set the batch size to match the number of GPUs?

Any suggestions are welcome.
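For context on the second question, here is a minimal sketch of how I understand the batch size to interact with DDP (this is my assumption, not code from this repo: with DDP, each process draws its own mini-batches, so the effective batch size is the per-GPU batch size times the number of processes; the helper name `per_gpu_batch_size` is hypothetical):

```python
# Hypothetical sketch: under DDP, each of the N processes loads its own
# mini-batches, so effective batch = per_gpu_batch * world_size.
# To reproduce the paper's batch size of 48 on, say, 4 GPUs, each
# process would use 48 // 4 = 12 samples per step.

def per_gpu_batch_size(global_batch: int, world_size: int) -> int:
    """Split a global batch size evenly across DDP processes."""
    if global_batch % world_size != 0:
        raise ValueError("global batch size should divide evenly across GPUs")
    return global_batch // world_size

print(per_gpu_batch_size(48, 1))  # single V100, as in the paper -> 48
print(per_gpu_batch_size(48, 4))  # four GPUs -> 12 per process
```

Is this the right mental model for how batch size should be set here, or does the config already specify a per-GPU value?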