
Memory leak

Open Shelkey opened this issue 4 years ago • 5 comments

When running the application either from a VM with Ubuntu or from Google Colab (6 GB and 13.5 GB of RAM respectively), RAM usage increases continually until the application crashes. Is there a limiter, or does it simply require every bit of RAM?

Shelkey avatar Jul 22 '20 23:07 Shelkey

While the app usually doesn't crash in Colab, I should note that it is more prone to crashing when working with larger files or when the fps, num_frames, or longer_side_len arguments are increased.

Shelkey avatar Jul 25 '20 21:07 Shelkey

I think the problem is with the way the depth matrix is being calculated in:

print(f"Writing depth ply (and basically doing everything) at {time.time()}")
rt_info = write_ply(image, depth, sample['int_mtx'], mesh_fi, config,
                    rgb_model, depth_edge_model, depth_edge_model, depth_feat_model)

This function leaks memory for large images (resolution over 1K). Basically, you need to restrict the resolution by setting longer_side_len to a maximum of 1K, or it crashes. Is there any way we can fix this memory leak?
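A minimal sketch of the clamp that this longer_side_len workaround implies (the helper name and rounding behavior are my own illustration, not the repo's actual resize code):

```python
def clamp_longer_side(width, height, max_len=1024):
    """Scale (width, height) so the longer side is at most max_len,
    preserving aspect ratio. Rounding here is an assumption; the repo
    may resize differently."""
    longer = max(width, height)
    if longer <= max_len:
        return width, height
    scale = max_len / longer
    return round(width * scale), round(height * scale)

print(clamp_longer_side(2048, 1536))  # a 2K image is halved to (1024, 768)
```

Pre-resizing inputs this way keeps write_ply's peak memory bounded, at the cost of output resolution.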

peymanrah avatar Jul 27 '20 18:07 peymanrah

I am not well versed in virtual machines, but if you can configure a large paging file (a.k.a. virtual memory), the application will spill into that and won't crash, depending on how much you allocate.

0SmooK0 avatar Jul 28 '20 19:07 0SmooK0

@peymanrah so how could we de-allocate, or what do you think should be set to None to free memory resources? I'm running in Docker and this is what I am getting:

Writing depth ply (and basically doing everything) at 1604091515.5040085
WARNING:py.warnings:/app/mesh_tools.py:174: RuntimeWarning: divide by zero encountered in true_divide
  input_disp = 1./np.abs(input_depth)
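That divide-by-zero warning comes from inverting a depth map that contains zeros; a hedged sketch of a guard (the helper name and eps floor are illustrative, not from mesh_tools.py):

```python
import numpy as np

def safe_disparity(depth, eps=1e-6):
    """Invert depth to disparity while flooring |depth| at eps,
    avoiding the RuntimeWarning raised by 1./np.abs(input_depth)
    when the depth map contains exact zeros."""
    return 1.0 / np.clip(np.abs(depth), eps, None)

safe_disparity(np.array([0.0, 2.0]))  # finite everywhere; second entry is 0.5
```

The warning is likely unrelated to the crash itself, but silencing it this way makes the real out-of-memory failure easier to spot in the logs.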

  0%|          | 0/1 [03:15<?, ?it/s]
Traceback (most recent call last):
  File "main.py", line 523, in <module>
    generate_2dto3d(config, batch_size=25)
  File "main.py", line 417, in generate_2dto3d
    process_image_2dto3d(config)
  File "main.py", line 302, in process_image_2dto3d
    depth_feat_model)
  File "/app/mesh.py", line 1941, in write_ply
    inpaint_iter=0)
  File "/app/mesh.py", line 1476, in DL_inpaint_edge
    cuda=device)
  File "/app/networks.py", line 311, in forward_3P
    edge_output = self.forward(enlarge_input)
  File "/app/networks.py", line 325, in forward
    x7 = self.decoder_2(torch.cat((x6, x1), dim=1))
  File "/opt/conda/envs/3DP/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/envs/3DP/lib/python3.7/site-packages/torch/nn/modules/container.py", line 100, in forward
    input = module(input)
  File "/opt/conda/envs/3DP/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/envs/3DP/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 345, in forward
    return self.conv2d_forward(input, self.weight)
  File "/opt/conda/envs/3DP/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 342, in conv2d_forward
    self.padding, self.dilation, self.groups)
RuntimeError: [enforce fail at CPUAllocator.cpp:64] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 213665792 bytes. Error code 12 (Cannot allocate memory)
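One way to find out what would need to be set to None, without touching the model code, is Python's standard-library tracemalloc: snapshot before and after one pipeline iteration and diff the allocations. The list comprehension below is only a stand-in for the real write_ply call:

```python
import tracemalloc

tracemalloc.start()
snap_before = tracemalloc.take_snapshot()

# Stand-in for one iteration of the pipeline (e.g. the write_ply call);
# replace with the real work when profiling the actual app.
leaked = [bytearray(4096) for _ in range(1000)]

snap_after = tracemalloc.take_snapshot()
stats = snap_after.compare_to(snap_before, "lineno")
for stat in stats[:5]:
    print(stat)  # the biggest growth sites, as file:line with byte deltas
```

Running this around successive iterations shows which file and line keep accumulating memory; those are the references worth dropping (or wrapping in an explicit del plus gc.collect()) between images.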

zdhernandez avatar Oct 30 '20 21:10 zdhernandez

Did anyone find a way to fix this issue? I am encountering the same memory leak.

firohuber avatar Aug 22 '21 06:08 firohuber