SLM-Lab
How to run the code on multiple GPUs?
Hi, kengz,
I've run into a problem with how to run on multiple GPUs. In the `__init__` of class `ConvNet` in conv.py, the code assigns the device as follows:

`self.to(self.device)`

But how do I extend this to multiple GPUs, either in the `__init__` of class `ConvNet` or for a specific instantiation of `ConvNet`?
When I try to use `torch.nn.DataParallel(module, device_ids=None, output_device=None, dim=0)` to assign the net to multiple GPUs, the problem is that some public methods and attributes defined on class `ConvNet` are no longer accessible after `conv_model = torch.nn.DataParallel(conv_model, device_ids=[1, 2, 3, 4])`.
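For reference, this happens because `nn.DataParallel` wraps the module: `forward` is proxied by the wrapper, but custom attributes and methods still live on the wrapped net and must be reached through `.module`. A minimal sketch of the behavior (`TinyConvNet` and its attribute are illustrative stand-ins, not SLM-Lab's `ConvNet`):

```python
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3)
        self.hid_layers_activation = 'relu'  # example custom attribute

    def forward(self, x):
        return self.conv(x)

model = TinyConvNet().to('cuda:0')
# parameters must live on device_ids[0] before wrapping;
# device_ids should be a list, e.g. [0, 1, 2, 3], not a set
model = nn.DataParallel(model, device_ids=[0, 1, 2, 3])

out = model(torch.randn(8, 3, 32, 32, device='cuda:0'))  # forward is proxied
print(model.module.hid_layers_activation)  # custom attributes need .module
```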
hey @Jzhou0, SLM Lab wasn't written with distributed training across GPUs in mind. However, I think you could do so as follows:
- Write your own extension of the conv net class, something like this: https://pytorch.org/tutorials/beginner/former_torchies/parallelism_tutorial.html, and have it consume a new key passed in from `net_spec` for GPU assignment as you need (see the sketch after this list).
- Specify your custom net class in your net_spec with `"type": "YourConvNet"`, along with the rest of the net spec values.
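A minimal sketch of what that extension might look like, assuming SLM-Lab's net constructor signature `(net_spec, in_dim, out_dim)` and a hypothetical `device_ids` net_spec key; the attribute name `conv_model` is also an assumption about `ConvNet`'s internals:

```python
import torch.nn as nn
from slm_lab.agent.net.conv import ConvNet

class YourConvNet(ConvNet):
    '''ConvNet extension that runs its conv body data-parallel across GPUs.'''

    def __init__(self, net_spec, in_dim, out_dim):
        super().__init__(net_spec, in_dim, out_dim)
        # hypothetical net_spec key listing the GPUs to shard across
        device_ids = net_spec.get('device_ids', [0])
        self.device = f'cuda:{device_ids[0]}'
        # parameters must sit on device_ids[0] before wrapping
        self.conv_model = nn.DataParallel(
            self.conv_model.to(self.device), device_ids=device_ids)
```

In the net spec, this would mean adding a `"device_ids": [1, 2, 3, 4]` entry alongside `"type": "YourConvNet"`.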
And the algorithm should just be able to pick it up. Depending on the algorithm, the loss computation may use data from different devices, so you'd need to make sure the correct device transfers happen in your net class implementation. But again, certain things might break when you're training something this big across devices, so definitely watch out for that. Let me know how it goes!
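On the device-transfer point, a generic pattern (not SLM-Lab's actual training loop) is to move each batch onto the net's own device before computing the loss:

```python
import torch

def train_step(net, optim, loss_fn, states, targets):
    # works for a plain net or a DataParallel wrapper: params sit on device_ids[0]
    device = next(net.parameters()).device
    states, targets = states.to(device), targets.to(device)
    preds = net(states)             # DataParallel scatters inputs, gathers outputs
    loss = loss_fn(preds, targets)  # loss computed on the gathered-output device
    optim.zero_grad()
    loss.backward()
    optim.step()
    return loss.item()
```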