memcnn
Does memcnn support multi-GPU training?
Yes, it should support multi-GPU training. Please let me know if you run into any issues.
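For reference, a minimal sketch of what multi-GPU training could look like, using the AdditiveCoupling / InvertibleModuleWrapper API from the MemCNN README together with plain torch.nn.DataParallel (the block structure and parameter values here are illustrative, not from this thread):

```python
import torch
import torch.nn as nn
import memcnn

# Small sub-network used for the two coupling halves
# (5 channels = 10 input channels split in two).
class ExampleOperation(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.seq = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.seq(x)

coupling = memcnn.AdditiveCoupling(Fm=ExampleOperation(5), Gm=ExampleOperation(5))
# keep_input=True keeps this demo simple; set it to False to get the
# memory savings that MemCNN provides.
model = memcnn.InvertibleModuleWrapper(fn=coupling, keep_input=True,
                                       keep_input_inverse=True)

# Standard PyTorch data parallelism; nothing MemCNN-specific is needed here.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model = model.cuda()

x = torch.randn(8, 10, 32, 32, device="cuda")
y = model(x)
y.sum().backward()
```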
Can your method balance GPU memory usage?
I am seeing this problem.
I assume that what you are showing is the output of the nvidia-smi command. The ReversibleBlock of MemCNN by default frees the memory of its inputs right after the outputs have been assigned. So that memory might actually be available again, but what you see in nvidia-smi is memory that is still reserved by PyTorch's caching allocator. Hence, the actually available memory is not reflected by nvidia-smi. To see how much memory is really in use, you can use the torch.cuda.memory_allocated() command per device.
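A minimal sketch of such a per-device check (the loop and formatting are illustrative; torch.cuda.memory_allocated() and torch.cuda.memory_reserved() are standard PyTorch calls):

```python
import torch

# Print allocated vs. reserved memory per GPU. nvidia-smi reports the
# reserved amount, which PyTorch's caching allocator holds on to even
# after tensors are freed, so "allocated" reflects real usage.
# (On older PyTorch versions memory_reserved() was called memory_cached().)
for i in range(torch.cuda.device_count()):
    allocated = torch.cuda.memory_allocated(i) / 1024 ** 2
    reserved = torch.cuda.memory_reserved(i) / 1024 ** 2
    print(f"cuda:{i}: allocated {allocated:.1f} MiB, reserved {reserved:.1f} MiB")
```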
See also: https://pytorch.org/docs/stable/notes/cuda.html#cuda-memory-management