cutde
Multiple GPU usage and init
This is a question rather than an issue.
I have multiple GPUs available and would like to run multiple processes, each on a different GPU.
Currently, all the processes automatically try to run on the first GPU, which then raises pycuda._driver.LogicError: cuMemAlloc failed: initialization error
Do you know how to do this? I found your CUDAContextWrapper class, but I could not find where it is used. I guess one could pass in contexts for the different GPUs there?
I found a post on how to build a GPUThread here: https://shephexd.github.io/development/2017/02/19/pycuda.html That hints at an approach, but I do not fully grasp how I would integrate it with cutde...
Thanks in advance for any suggestion!