[BUG] Usage of Dictionaries will cause a "Memory Leak" in Multi-Node Environment
Description
I am working on a SLURM multi-node cluster where each node has two GPUs. In such a multi-process environment, the usage of dictionaries (i.e., the samples provided by torchgeo datamodules) triggers the -- quote -- "copy-on-access problem of forked python processes". See https://github.com/pytorch/pytorch/issues/13246#issuecomment-905703662 for more information. Because forked DataLoader workers write reference counts into memory pages shared with the parent, those copy-on-write pages are gradually copied, leading to ever-increasing memory usage that will sooner or later cause the process to crash.
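For context, the workaround most often suggested in that upstream PyTorch thread is to avoid storing per-sample metadata in Python lists/dicts (every element is a refcounted object, so merely reading it dirties copy-on-write pages in forked workers) and instead serialize it into a single numpy byte buffer, which has no per-element refcounts. The `pack_samples`/`get_sample` helpers below are a hypothetical sketch of that idea, not torchgeo API:

```python
import pickle

import numpy as np


def pack_samples(samples):
    """Pickle each sample dict into one flat uint8 buffer plus offsets.

    A numpy array holds raw bytes, not Python objects, so reading it in a
    forked worker does not write any refcounts into shared memory pages.
    """
    blobs = [pickle.dumps(s) for s in samples]
    offsets = np.cumsum([0] + [len(b) for b in blobs])
    buffer = np.frombuffer(b"".join(blobs), dtype=np.uint8)
    return buffer, offsets


def get_sample(buffer, offsets, i):
    """Deserialize sample i on demand inside a worker process."""
    start, end = offsets[i], offsets[i + 1]
    return pickle.loads(buffer[start:end].tobytes())


samples = [{"image": [1, 2, 3], "label": 0}, {"image": [4, 5], "label": 1}]
buffer, offsets = pack_samples(samples)
assert get_sample(buffer, offsets, 1) == {"image": [4, 5], "label": 1}
```

A dataset's `__getitem__` could call `get_sample` instead of indexing a Python list; whether this actually resolves the torchgeo case depends on where the growing allocations come from, which is still being investigated below.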
Steps to reproduce
Use any datamodule that provides dictionaries as samples in a multi-node environment.
Version
0.6.0
Hi @MathiasBaumgartinger, apologies for the delay. This issue slipped under the radar during the AGU/holiday rush and then I got busy with teaching. I'm now looking into this and have read all of https://github.com/pytorch/pytorch/issues/13246.
We've received a few other reports of memory issues, specifically #1438 (@patriksabol), #1578 (@trettelbach), and #1694 (@pmaldonado). @yichiac also reported something similar but didn't open an issue. I'm now wondering if what we thought was an issue with GDAL's cache was actually an issue with copy-on-write multiprocessing behavior. Or it's possible that Rtree or Python's LRU cache behave similarly to Python lists/dicts.
From talking to the other maintainers, it isn't clear whether the issue stems from returning dicts as samples, or from storing information in lists/dicts as dataset attributes. I'll also be investigating this and will report back once I've found a good reproducer.
Can anyone share a specific builtin dataset/datamodule with which they have encountered this issue? A YAML file with the settings you used would be especially helpful. I haven't yet managed to reproduce the issue with the datasets I've tried, but I also haven't yet found a multi-node environment I can test on. Also, what tools do you use to visualize memory usage, just the builtin Activity Monitor?
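For anyone trying to quantify the growth without external tools, one stdlib-only option (an assumption about what might help debugging, not something anyone in this thread has confirmed using) is to log peak RSS from inside each process, e.g. from a DataLoader `worker_init_fn`:

```python
import os
import resource  # Unix-only stdlib module
import sys


def log_peak_rss(tag=""):
    """Print this process's peak resident set size in MiB and return it in bytes."""
    # ru_maxrss is reported in KiB on Linux but in bytes on macOS.
    scale = 1 if sys.platform == "darwin" else 1024
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss * scale
    print(f"[pid {os.getpid()}] {tag} peak RSS = {rss / 2**20:.1f} MiB")
    return rss
```

Calling this periodically from each worker (e.g. every N batches) would show whether per-worker RSS climbs steadily, which is the signature of the copy-on-access behavior described above.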