haystack
Make it possible to choose which GPU to use for training the DPR model, until multi-GPU training gets implemented
Could you please make it possible in the `train(...)` method of the `DensePassageRetriever` class to specify at least which CUDA device we want to use? Currently it is hard-coded to `cuda:0`, which I assume was meant to change once multi-GPU training gets implemented. Nevertheless, until multi-GPU support arrives, it seems logical to let users choose which GPU to use if they have several.
I am working on a platform where I share GPU resources with my colleagues. When device `cuda:0` is occupied by someone else, I either have to wait for the GPU to become free again, or change the hyperparameters so that they consume fewer GPU resources (which of course leads to worse results).
Hey @nhadziosma1! You should be able to choose the device you want to use when initializing the `DensePassageRetriever`, for example like this:
```python
from haystack.nodes import DensePassageRetriever

dpr = DensePassageRetriever(devices=["cuda:1"])
```
This would allow you to use `cuda:1` instead of the default `cuda:0`.
Let me know if that works for you :)
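As a general workaround (not specific to Haystack), you can also hide all but one physical GPU from the process via the standard `CUDA_VISIBLE_DEVICES` environment variable. With it set to `"1"`, physical GPU 1 is remapped to `cuda:0` inside the process, so even code hard-coded to `cuda:0` will run on it. A minimal sketch, assuming the variable is set before any CUDA context is created:

```python
import os

# Must be set before torch (or any CUDA-using library) initializes a device.
# Physical GPU 1 will then appear to this process as "cuda:0".
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# Hypothetical usage from here on, e.g.:
# from haystack.nodes import DensePassageRetriever
# dpr = DensePassageRetriever()  # hard-coded "cuda:0" now maps to physical GPU 1
print(os.environ["CUDA_VISIBLE_DEVICES"])
```

Equivalently, you can launch the script with `CUDA_VISIBLE_DEVICES=1 python train.py` (script name illustrative) so no code change is needed at all.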