UVM backend
Add CUDAMallocManagedAllocator backend
With the new CUDAAllocator class, we have created a new
CUDAMallocManagedAllocator, which handles allocation requests for both the
CPU and CUDA device types when the backend is enabled.
You can enable the backend by setting PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocManaged
and query the active backend from PyTorch with torch.cuda.get_allocator_backend().
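As a minimal sketch of the workflow above (assuming a CUDA-capable machine; the environment variable must be set before the first CUDA allocation, so in practice it is usually exported in the launching shell rather than set inside the script):

```python
import os

# Select the managed-memory backend; this only takes effect if set before
# the CUDA caching allocator is initialized. Normally you would run:
#   PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocManaged python train.py
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "backend:cudaMallocManaged")

import torch

if torch.cuda.is_available():
    # Reports "cudaMallocManaged" when the UVM backend is active,
    # otherwise the default "native" (or "cudaMallocAsync").
    print(torch.cuda.get_allocator_backend())
```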
This allocator is initially rudimentary, as the performance implications of a
managed allocator are still being worked out. The goal, however, is to let users
swap out the allocator backend at runtime without any code changes.