Repeated evaluations of a large convolution allocate more and more memory
The following script repeatedly runs a large convolution operator on the same input and prints the process's memory usage after each evaluation:
import mxnet as mx
import os, psutil

def get_memory_usage():
    # Resident set size of this process, in MB
    return psutil.Process(os.getpid()).memory_info().rss / 1e+6

sym = mx.sym.Convolution(
    mx.sym.Variable('in'),
    mx.sym.Variable('w'),
    mx.sym.Variable('b'),
    kernel=(20, 20),
    num_filter=1
)

inputs = {
    'in': mx.nd.ones([1, 3, 500, 500]),
    'w': mx.nd.ones([1, 3, 20, 20]),
    'b': mx.nd.ones([1])
}

cached_op = mx.ndarray.CachedOp(sym)

print('Initial memory: ' + str(get_memory_usage()))
for i in range(10):
    cached_op(*inputs.values(), default_ctx=mx.cpu())
    mx.ndarray.waitall()
    print(get_memory_usage())
This is what I'm getting:
Initial memory: 188.06784
1306.4192
2416.996352
3527.53664
4638.076928
4638.076928
4638.076928
4638.076928
4638.076928
4638.076928
4638.076928
Memory usage climbs sharply over the first few evaluations and then plateaus. I would naively expect it to stop growing after the first evaluation. Why does this happen? Is there a way to control this behaviour?
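To check whether the growth comes from Python-side objects rather than from MXNet itself, here is a minimal variation of the loop above (just a sketch; it reuses cached_op, inputs, and get_memory_usage from the script, and the only addition is the standard-library gc module):

import gc

for i in range(10):
    cached_op(*inputs.values(), default_ctx=mx.cpu())
    mx.ndarray.waitall()   # make sure the convolution has actually finished
    gc.collect()           # drop any Python-side temporaries before measuring
    print(get_memory_usage())

If the numbers follow the same pattern with explicit garbage collection in place, the retained memory is presumably held inside MXNet itself (my guess would be some internal workspace or memory pool, but I'm not sure) rather than by Python objects.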