Pytorch-Memory-Utils
memory info doesn't match
The total used memory shown at one checkpoint, plus the memory of the tensors created in between, does not equal the total used memory shown at the next checkpoint.
Same problem here.
I defined a tensor of size [6, 12, 2048, 2048]. In fp32 it consumes 1207.9 MB, however line 13 of the output shows Total Used Memory: 2511.9 MB.
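For reference, the arithmetic behind that first number: 6 × 12 × 2048 × 2048 elements × 4 bytes = 1,207,959,552 bytes ≈ 1207.9 MB. A minimal sketch (assuming a CUDA device is available) that reproduces the gap between the tensor's own storage and what the device reports:

```python
import torch

# Hypothetical reproduction of the report above (assumes a CUDA device).
# fp32 tensor of shape [6, 12, 2048, 2048]:
#   6 * 12 * 2048 * 2048 elements * 4 bytes = 1,207,959,552 bytes ~ 1207.9 MB
x = torch.empty(6, 12, 2048, 2048, dtype=torch.float32, device="cuda")

tensor_bytes = x.element_size() * x.nelement()
print(f"tensor storage:   {tensor_bytes / 1e6:.1f} MB")                   # ~1208.0 MB

# What PyTorch itself has handed out to tensors -- matches the tensor size.
print(f"memory_allocated: {torch.cuda.memory_allocated() / 1e6:.1f} MB")

# Total Used Memory is read from the device (like nvidia-smi), so it also
# counts the CUDA context itself, which explains the ~2511.9 MB reading.
```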
Hello. I just answered this question in my PR. It is because the CUDA kernels take some space.
If you are interested, you can see the revised code here:
https://github.com/hzhwcmhf/Pytorch-Memory-Utils/blob/master/README.md#faqs
Why Total Tensor Used Memory is much smaller than Total Allocated Memory?
* Total Allocated Memory is the peak of memory usage. When you delete tensors, PyTorch does not release the space back to the device until you call gpu_tracker.clear_cache(), as our scripts do (see the sketch after this list).
* The CUDA kernels take up some space. See https://github.com/pytorch/pytorch/issues/12873
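A short sketch of both points, assuming a CUDA device; torch.cuda.empty_cache() is the built-in call that, as far as I can tell, gpu_tracker.clear_cache() wraps:

```python
import torch

# Demonstrates: (1) PyTorch caches freed memory instead of returning it,
# (2) the CUDA context occupies device memory that no PyTorch counter shows.
x = torch.empty(1024, 1024, 256, device="cuda")        # 1 GiB of fp32
print(torch.cuda.memory_allocated() // 1024**2, "MB")  # ~1024 MB held by tensors

del x
print(torch.cuda.memory_allocated() // 1024**2, "MB")  # ~0 MB held by tensors...
print(torch.cuda.memory_reserved() // 1024**2, "MB")   # ...but ~1024 MB still cached

torch.cuda.empty_cache()                               # release cache to the device
print(torch.cuda.memory_reserved() // 1024**2, "MB")   # ~0 MB cached now
# nvidia-smi still shows several hundred MB in use: that is the CUDA context.
```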