
Using another GPU instead of the default

Open alimgh opened this issue 3 years ago • 0 comments

Problem In the current version of Stanza, the use_gpu parameter determines whether the CUDA device is used instead of the CPU. Unfortunately, this option doesn't support multi-GPU environments: there is no way to load a model on a specific device. For example, I have two GPUs and want to load one instance of the Stanza model on each and distribute my inputs between those instances, so I can get the most utilization out of the system and speed up my processing.
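The distribution described above could be sketched roughly like this; the `distribute` helper is hypothetical, and in practice each "worker" would be a Stanza Pipeline loaded on its own GPU:

```python
from itertools import cycle

def distribute(inputs, workers):
    """Round-robin inputs across workers (e.g. one pipeline per GPU).

    Returns a list of (worker, input) pairs. Each worker stands in for
    a model instance pinned to one device, so the GPUs share the load.
    """
    assignments = []
    for worker, item in zip(cycle(workers), inputs):
        assignments.append((worker, item))
    return assignments

# Three documents spread over two GPU-bound workers:
# distribute(["doc1", "doc2", "doc3"], ["cuda:0", "cuda:1"])
# -> [("cuda:0", "doc1"), ("cuda:1", "doc2"), ("cuda:0", "doc3")]
```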

Solution When I checked the source code of the Stanza package, I found that Stanza uses torch models at its core, and use_gpu is simply a boolean checked alongside torch.cuda.is_available() to decide whether the Pipeline should use the default CUDA device. It would be possible to replace the use_gpu parameter with something like a device parameter defaulting to "cuda" (instead of use_gpu=True). The Pipeline would then check whether the device string starts with "cuda" (covering cases like device="cuda:1") and CUDA is available, and if so use that specific device, allowing models to be loaded on different devices.
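A minimal sketch of the proposed device-resolution logic, assuming the new `device` parameter described above; `cuda_available` stands in for `torch.cuda.is_available()` so the example runs without a GPU:

```python
def resolve_device(device="cuda", cuda_available=True):
    """Map a requested device string to the device the Pipeline should use.

    A string starting with "cuda" (e.g. "cuda" or "cuda:1") selects that
    GPU when CUDA is available; anything else, or an unavailable CUDA
    runtime, falls back to the CPU.
    """
    if device.startswith("cuda") and cuda_available:
        return device
    return "cpu"
```

With this, `resolve_device("cuda:1")` returns "cuda:1" on a CUDA machine, so two Pipeline instances constructed with "cuda:0" and "cuda:1" would end up on different GPUs, while `resolve_device("cuda:1", cuda_available=False)` falls back to "cpu".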

Another Solution It is also possible to add a parameter such as device_idx to select the desired GPU index while keeping use_gpu as it is now.
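This alternative could look like the following sketch, where `device_idx` is the hypothetical new parameter and `cuda_available` again stands in for `torch.cuda.is_available()`:

```python
def resolve_device_idx(use_gpu=True, device_idx=0, cuda_available=True):
    """Keep the existing use_gpu flag, but add a GPU index selector.

    use_gpu retains its current meaning; device_idx only takes effect
    when a GPU is requested and CUDA is available.
    """
    if use_gpu and cuda_available:
        return f"cuda:{device_idx}"
    return "cpu"
```

Under this variant, existing callers keep working unchanged (device_idx defaults to 0, i.e. the current default CUDA device), while multi-GPU users can pass device_idx=1 for the second GPU.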

alimgh · Aug 02 '22 08:08