Torch-Scope
Feature request: multi-GPU output from pw.auto_device() for data-parallel models.
When using a single GPU, auto_device finds the best available GPU and returns its index.
In the data-parallel case, where k GPUs are needed (say k = 2), could auto_device return the indices of the k best available GPUs so that nn.DataParallel(model) is also supported? See the sketch below for the kind of behavior I have in mind.
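For illustration, here is a rough sketch; auto_devices is just a placeholder name (not the existing torch-scope API), and ranking GPUs by free memory as reported by nvidia-smi is only one possible heuristic:

```python
import subprocess

import torch.nn as nn


def auto_devices(k=2):
    """Placeholder helper (not the current torch-scope API): return the
    indices of the k GPUs with the most free memory per nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.free",
         "--format=csv,noheader,nounits"],
        encoding="utf-8",
    )
    free_mem = [int(line) for line in out.strip().splitlines()]
    # Rank GPU indices by free memory (descending) and keep the top k.
    return sorted(range(len(free_mem)),
                  key=lambda i: free_mem[i], reverse=True)[:k]


device_ids = auto_devices(k=2)                 # e.g. [3, 1]
model = nn.Linear(10, 10).cuda(device_ids[0])  # params must live on the first id
model = nn.DataParallel(model, device_ids=device_ids)
```

Since nn.DataParallel scatters inputs across device_ids and gathers outputs on device_ids[0] by default, the first returned index should probably be the GPU with the most headroom.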
Regards, sobalgi.
Thanks for asking! We haven't tested the DataParallel case yet, and my guess is that it would currently break. You're welcome to submit a PR if you want to implement this feature :-)
Ya, I was about to implement it. I'll submit a PR once I'm done. Thanks.