pygmtsar
[Help]: Gaussian detrending out of memory for large stacks
Is Gaussian detrending being performed on pixels across the entire time stack? I'm running out of memory, and reducing the chunksize to 128 didn't help. My system has 128 GB RAM.
Here is the unwrap phase stack:
Jan 22 22:14:33 steffan systemd-oomd[664]: Killed /user.slice/user-1000.slice/user@1000.service/app.slice/app-org.gnome.Terminal.slice/vte-spawn-93055cd3-b868-4ce5-82f7-fadf47971679.scope due to memory pressure for /user.slice/user-1000.slice/user@1000.service being 69.24% > 50.00% for > 20s with reclaim activity
Jan 22 22:14:33 steffan systemd[2141]: vte-spawn-93055cd3-b868-4ce5-82f7-fadf47971679.scope: systemd-oomd killed 118 process(es) in this unit.
The error on the plot is related to the Tornado web interface communication and is not a code issue; we can ignore it. Your OOM killer message reports 'memory pressure for ... being 69.24% > 50.00% for > 20s with reclaim activity', which means only about 64 GB of RAM is effectively available due to your system settings. The processing notebook probably uses more than 64 GB of the 128 GB available, and it is normal behavior to utilize 80-90% of available RAM. You can configure the Dask scheduler to use only 64 GB if you prefer to limit RAM usage that strictly.
I've simulated a large stack, and it works well on my Apple Silicon host with 16 GB of RAM:
I see you split the stack into 10. I wanted to do something similar but wasn't sure whether the Gaussian function worked across the time dimension. But I guess it works independently on each pair.
Yes, it works on 2D grids. In fact, we can speed up all the code by making the 'pair' dimension mandatory for stacks. Currently, we always need to check whether the input is a single 2D grid or a 3D stack. This approach is more flexible but comes at a cost.
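Since the filter operates on 2D grids, the per-pair processing can be sketched as below. This is an illustration of the idea, not PyGMTSAR's actual API: the function name, `sigma` value, and detrend-by-subtraction scheme are assumptions; peak memory scales with one 2D grid rather than the whole 3D stack:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detrend_gaussian(stack, sigma=16):
    """Remove the long-wavelength trend from each interferogram independently.

    `stack` is shaped (pair, y, x). Each 2D grid is low-pass filtered with a
    Gaussian and the smooth trend is subtracted, one pair at a time.
    """
    out = np.empty_like(stack)
    for i, grid in enumerate(stack):
        trend = gaussian_filter(grid, sigma=sigma)
        out[i] = grid - trend
    return out

# Tiny synthetic stack: 3 pairs of 64x64 phase grids.
stack = np.random.default_rng(0).normal(size=(3, 64, 64))
detrended = detrend_gaussian(stack)
print(detrended.shape)  # (3, 64, 64)
```

Making the 'pair' dimension mandatory would let this loop (or a Dask `map_blocks` over single-pair chunks) be the only code path, instead of branching on 2D vs. 3D input every call.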