
Multi-GPU training & batchsize problem

Open X-Drunker opened this issue 1 year ago • 0 comments

Hi, I really appreciate your work, and I'm now going to train the model with this pipeline. My issues are as follows:

  1. I note that you have adapted the code for multi-GPU training with DDP, but I can't figure out how to actually train with multiple GPUs. Maybe I should set self.use_ddp = True here?
  2. In the paper you mentioned that you trained DiffSinger on 1 NVIDIA V100 GPU with a batch size of 48. However, I can't find any customizable variable related to batch size. If I want to train with multiple GPUs, is it necessary to set the batch size to match the number of GPUs? Any suggestions are welcome.
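For what it's worth, in a typical PyTorch DDP setup each process loads its own batches, so the effective (optimizer-level) batch size scales with the number of GPUs. A minimal sketch of that relationship, assuming standard DDP semantics (the names `per_gpu_batch_size` and `grad_accum_steps` are illustrative, not DiffSinger's actual config keys):

```python
def effective_batch_size(per_gpu_batch_size: int,
                         num_gpus: int,
                         grad_accum_steps: int = 1) -> int:
    """Under DDP, every process consumes its own mini-batch, so the
    gradient step effectively averages over
    per-GPU batch size x world size x gradient-accumulation steps."""
    return per_gpu_batch_size * num_gpus * grad_accum_steps


# Example: the paper's batch size of 48 on 1 V100 could be matched on
# 4 GPUs by setting the per-GPU batch size to 12:
print(effective_batch_size(12, 4))  # -> 48
```

So rather than matching the batch size to the number of GPUs, the usual approach is to divide the target global batch size by the GPU count (or keep the per-GPU batch size and accept a larger effective batch, often with a scaled learning rate).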

X-Drunker — Nov 14 '23 14:11