
Empty CUDA cache before profiling

willprice opened this issue on Apr 03 '20

Why:

  • If you have previously run the code you wish to profile with different parameters that caused it to allocate more memory, torch's caching CUDA allocator holds on to that memory. The profiling results then reflect the parameter set that produced the maximum memory usage, not the run you are actually profiling.

This change addresses the need by:

  • Emptying the CUDA memory cache before profiling begins, so the results reflect only the current run.
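For illustration, a minimal sketch of the failure mode and the manual workaround. The function and sizes here are hypothetical; the `LineProfiler` context-manager usage follows the project README:

```python
import torch
from pytorch_memlab import LineProfiler

def work(batch_size):
    # each call allocates proportionally to batch_size
    x = torch.randn(batch_size, 1024, device='cuda')
    return (x @ x.T).sum().item()

work(4096)  # an earlier, larger run fills torch's caching allocator

# Without this, cached blocks left over from the 4096-row run
# can be attributed to the smaller run profiled below.
torch.cuda.empty_cache()

with LineProfiler(work) as prof:
    work(64)  # the run we actually want to profile
prof.display()
```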

willprice avatar Apr 03 '20 08:04 willprice

Can you give an example of the failing case? I was wondering whether the problem also exists in the decorator-style line profiler.
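For reference, the decorator-style profiler mentioned here is the `profile` decorator from the README; a hypothetical repro attempt along those lines might look like:

```python
import torch
from pytorch_memlab import profile

@profile
def work(batch_size):
    x = torch.randn(batch_size, 1024, device='cuda')
    y = x @ x.T
    return y.sum().item()

work(4096)  # large run first, warming torch's allocator cache
work(64)    # does this report inherit the earlier run's cached memory?
```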

Stonesjtu avatar Apr 03 '20 11:04 Stonesjtu

Huh... I'm struggling to reproduce it now in an independent notebook.

willprice avatar Apr 03 '20 12:04 willprice

Should we close this?

nyngwang avatar Feb 09 '23 00:02 nyngwang

Closing, as `empty_cache` is already called in the `enable` method:

https://github.com/Stonesjtu/pytorch_memlab/blob/43e4d09b1f710bdc278e8deaa8d28ba9c3a2f62b/pytorch_memlab/line_profiler/line_profiler.py#L87-L95
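Paraphrasing the linked lines (a sketch, not the verbatim source): `enable` clears the cache before tracing starts, so the stale-cache problem described above is handled by the profiler itself.

```python
import torch

# Sketch of the behaviour at the linked lines (paraphrase, not verbatim):
def enable(self):
    torch.cuda.empty_cache()  # drop cached blocks before tracing begins
    self._start_tracing()     # hypothetical name for the trace setup
```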

Stonesjtu avatar Feb 09 '23 02:02 Stonesjtu