pytorch_backend
Setting max_gpu_fraction as in Tensorflow backend
This PR permits setting a max_gpu_fraction for the pytorch_backend.
PyTorch allows setting the max GPU fraction through the CUDACachingAllocator. The user of the pytorch_backend can set the memory fraction in the same fashion as in the TensorFlow backend. The memory fraction applies to all models.
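For reference, a minimal sketch of how the backend might apply such a setting. PyTorch exposes the CUDACachingAllocator limit via `torch.cuda.set_per_process_memory_fraction`; the config key name `max_gpu_fraction` and the helper functions below are assumptions for illustration, not the PR's actual implementation:

```python
def validate_fraction(fraction: float) -> float:
    """Check the configured fraction; PyTorch requires it to be in (0, 1]."""
    if not 0.0 < fraction <= 1.0:
        raise ValueError(f"max_gpu_fraction must be in (0, 1], got {fraction}")
    return fraction

def apply_max_gpu_fraction(fraction: float) -> None:
    """Apply the fraction to every visible CUDA device (one possible policy)."""
    import torch  # imported lazily so validation is testable without CUDA

    fraction = validate_fraction(fraction)
    if torch.cuda.is_available():
        # Open question raised in this PR: with multiple GPUs, should the
        # same fraction apply to every device (as sketched here), or should
        # it be configurable per device?
        for device in range(torch.cuda.device_count()):
            torch.cuda.set_per_process_memory_fraction(fraction, device)
```

The per-device loop is only one answer to the multi-GPU question below; a per-device mapping in the config would be the alternative.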
I am a bit uncertain about how to handle the multi-GPU case and would appreciate feedback on the desired behavior there.
any update on this?
I haven't heard from the owners of the repo yet. Is someone available for a code review?