
limit the amount of memory a process can allocate on a single CUDA device

DiTo97 opened this issue 2 years ago · 0 comments

Hi all,

As the title suggests, is there a way to limit the total amount of memory that a process can allocate on a single CUDA device?

Perhaps even by using pyNVML?

This issue is related to the following discussions:

  • https://unix.stackexchange.com/questions/630412/limit-gpu-resource-to-a-particular-process
  • https://github.com/ExpectationMax/simple_gpu_scheduler/issues/7
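From what I can tell, pyNVML (the `nvidia-ml-py` bindings) can only *observe* per-process device memory, not enforce a cap, so the closest thing would be an external watchdog that polls and reacts when a process goes over a limit. A minimal sketch, assuming the `nvidia-ml-py` package, a single device, and that terminating the offender is acceptable (the `over_limit` helper and `SIGTERM` choice are my own, not part of any NVML API):

```python
import os
import signal

def over_limit(used_bytes, limit_bytes):
    """True when a process's reported device-memory usage exceeds the cap.

    NVML reports usedGpuMemory as None when the value is unavailable
    (e.g. on Windows in WDDM mode), so treat that as "not over".
    """
    return used_bytes is not None and used_bytes > limit_bytes

def watch(device_index, pid, limit_bytes):
    # Requires an NVIDIA GPU and the nvidia-ml-py package.
    import pynvml

    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(device_index)
        for proc in pynvml.nvmlDeviceGetComputeRunningProcesses(handle):
            if proc.pid == pid and over_limit(proc.usedGpuMemory, limit_bytes):
                # Enforcement is external: NVML itself offers no memory cap.
                os.kill(pid, signal.SIGTERM)
    finally:
        pynvml.nvmlShutdown()
```

This is reactive rather than a hard limit, which is why I'm asking whether the driver or runtime exposes anything better.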

What are the cons of sharing the resources of a single CUDA device among different processes competing for access?

DiTo97 · Jul 08 '23 12:07