
Feature request: multi-GPU output from pw.auto_device() for data parallel models.

Open sobalgi opened this issue 6 years ago • 2 comments

When using a single GPU, auto_device finds the best available GPU and returns the right index.

In the data parallel case, where k GPUs are required (say k=2), could auto_device return the indices of the k best available GPUs, so that nn.DataParallel(model) is also supported?
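A minimal sketch of the selection logic being requested. This is hypothetical, not torch-scope's actual implementation: the function name `pick_best_gpus` and the memory-query step are assumptions, and the real auto_device may measure GPU availability differently (e.g. via nvidia-smi).

```python
def pick_best_gpus(free_memory_by_index, k):
    """Return the indices of the k GPUs with the most free memory.

    free_memory_by_index: dict mapping GPU index -> free memory (e.g. in MiB).
    Hypothetical helper illustrating the requested multi-GPU selection.
    """
    if k > len(free_memory_by_index):
        raise ValueError("requested more GPUs than are available")
    # Rank GPU indices by free memory, largest first, and keep the top k.
    ranked = sorted(free_memory_by_index,
                    key=free_memory_by_index.get,
                    reverse=True)
    return ranked[:k]


# Example: GPUs 2 and 0 have the most free memory.
print(pick_best_gpus({0: 8000, 1: 2000, 2: 11000}, k=2))  # → [2, 0]

# Intended usage on a multi-GPU machine (sketch; assumes some way to query
# per-GPU free memory, here a hypothetical query_free_memory()):
#
#   device_ids = pick_best_gpus(query_free_memory(), k=2)
#   model = nn.DataParallel(model, device_ids=device_ids)
#   model.to(f"cuda:{device_ids[0]}")
```

Note that nn.DataParallel accepts a `device_ids` list, and expects the model's parameters to live on the first device in that list, which is why the selected indices (rather than a single index) would need to be returned.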

regards, sobalgi.

sobalgi avatar Mar 16 '19 07:03 sobalgi

Thanks for asking! For now we haven't tested the DataParallel case, and my guess is that it would cause some bugs. You're welcome to submit a PR if you want to implement this feature :-)

LiyuanLucasLiu avatar Mar 16 '19 16:03 LiyuanLucasLiu

Ya, I was about to implement it. I'll submit a PR once I'm done. Thanks.

sobalgi avatar Mar 16 '19 20:03 sobalgi