
Setting max_gpu_fraction as in the TensorFlow backend

Open · FabianSchuetze opened this issue · 2 comments

This PR permits setting a max_gpu_fraction for the PyTorch backend.

PyTorch allows setting the maximum GPU memory fraction through the CUDACachingAllocator. With this change, users of the pytorch_backend can set the memory fraction in the same fashion as in the TensorFlow backend. The memory fraction applies to all models.
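
For reference, here is a minimal sketch of the PyTorch mechanism this relies on, shown through the Python API (the backend itself would call the equivalent C10 CUDACachingAllocator API from C++). The fraction value of 0.5 is an arbitrary example, not a default proposed by this PR:

```python
import torch

# Cap this process's CUDA allocations at 50% of device 0's total memory.
# Allocations that would exceed the cap raise an out-of-memory error
# instead of letting the caching allocator grow further.
torch.cuda.set_per_process_memory_fraction(0.5, device=0)

# Subsequent allocations go through the CUDACachingAllocator and are
# counted against the configured fraction.
x = torch.empty(1024, 1024, device="cuda:0")
```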

I am a bit uncertain how to handle the multi-GPU case and would appreciate feedback on the desired behavior there.
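
One possible answer to the multi-GPU question, sketched below, is to apply the same fraction to every visible device. This is only an illustration of one option under discussion, not what the PR currently implements:

```python
import torch

def set_fraction_on_all_devices(fraction: float) -> None:
    """Apply the same per-process memory fraction to every visible GPU."""
    for device_id in range(torch.cuda.device_count()):
        torch.cuda.set_per_process_memory_fraction(fraction, device=device_id)

set_fraction_on_all_devices(0.5)
```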

FabianSchuetze · May 22, 2023

Any update on this?

whateverforever · Sep 21, 2023

I haven't heard from the owners of the repo yet. Is someone available for a code review?

FabianSchuetze · Sep 22, 2023