TorchSharp
Force GPU memory limit
As mentioned in this question, PyTorch now supports limiting per-process GPU memory usage (set_per_process_memory_fraction), which helps with memory management and capacity planning.
I'm searching for the C++ code that implements it, but I haven't found it yet.
@NiklasGustafsson I found that the implementation of set_per_process_memory_fraction lives in torch/csrc/cuda/Module.cpp::_cuda_setMemoryFraction. I don't know how PyTorch binds its native methods to Python, but this might help :D
Maybe this is the method that does the actual work: c10::cuda::CUDACachingAllocator::setMemoryFraction(fraction, device);
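For anyone following along, here is a minimal sketch of calling that allocator API directly from C++. It assumes a CUDA-enabled libtorch build; the header path and signature match the 1.8-era sources and may differ in newer releases:

```cpp
#include <c10/cuda/CUDACachingAllocator.h>
#include <torch/torch.h>

int main() {
  if (torch::cuda::is_available()) {
    // Cap this process at 50% of device 0's total memory. Allocations
    // beyond the cap raise an out-of-memory error from the caching
    // allocator, even if the physical GPU still has free memory.
    c10::cuda::CUDACachingAllocator::setMemoryFraction(0.5, /*device=*/0);

    // Subsequent CUDA allocations go through the capped allocator.
    auto t = torch::empty({1024, 1024}, torch::device(torch::kCUDA));
  }
  return 0;
}
```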
CUDACachingAllocator
Which header file is that declared in? I can't find it in torch/cuda.h.
It's implemented in c10/cuda/CUDACachingAllocator.cpp, around line 927, and c10/cuda/CUDACachingAllocator.h is the header.
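For reference, the declaration in that header looks roughly like this (a sketch based on the 1.8-era sources; the file has since been reorganized, so check the version you're building against):

```cpp
// c10/cuda/CUDACachingAllocator.h (excerpt, approximate)
namespace c10 {
namespace cuda {
namespace CUDACachingAllocator {

// Limit this process to `fraction` (0 < fraction <= 1.0) of the
// device's total memory within the caching allocator.
C10_CUDA_API void setMemoryFraction(double fraction, int device);

} // namespace CUDACachingAllocator
} // namespace cuda
} // namespace c10
```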
@NiklasGustafsson I'm trying to implement this method, but it is only available in the CUDA backend API. Is there a place where CUDA-only APIs should go?
No, because the interop layer is backend-independent: it doesn't link to anything that isn't available in both the CPU and CUDA backends.
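For illustration only: a common pattern around this kind of constraint is to compile the CUDA-only call conditionally, so the exported C symbol exists in every build of the native layer, but only the CUDA build does any work. The wrapper name THSCuda_setMemoryFraction and the TORCHSHARP_CUDA define here are invented for the sketch, not TorchSharp's actual code:

```cpp
#ifdef TORCHSHARP_CUDA  // hypothetical define, set only for CUDA builds
#include <c10/cuda/CUDACachingAllocator.h>
#endif

// Exported with C linkage so the managed (P/Invoke) side can always
// bind to the same symbol, regardless of which backend was built.
extern "C" void THSCuda_setMemoryFraction(double fraction, int device) {
#ifdef TORCHSHARP_CUDA
  c10::cuda::CUDACachingAllocator::setMemoryFraction(fraction, device);
#else
  // CPU-only build: keep the symbol, but make the call a no-op.
  (void)fraction;
  (void)device;
#endif
}
```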