
OOM issues with 3D FCMAE fine-tuning

Open edyoshikun opened this issue 1 year ago • 2 comments

Currently, if we use DDP and the FCMAE model for fine-tuning on the virtual staining tasks, there seems to be a 'memory leak'. A possible solution is to expose the relevant DataLoader parameters (persistent_workers and prefetch_factor, described below) at the ViscyTrainer.
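As a rough illustration of what exposing those knobs could look like (this `ExampleDataModule` is hypothetical, not VisCy's actual datamodule), the values would simply be forwarded from the constructor, where the Trainer/CLI config can set them, down to the `DataLoader`:

```python
# Hypothetical sketch: surfacing the DataLoader knobs as datamodule arguments
# so they can be configured instead of being hardcoded.
from lightning.pytorch import LightningDataModule
from torch.utils.data import DataLoader, Dataset


class ExampleDataModule(LightningDataModule):
    def __init__(
        self,
        dataset: Dataset,
        batch_size: int = 8,
        num_workers: int = 4,
        persistent_workers: bool = False,  # proposed default to avoid accumulation
        prefetch_factor: int = 2,          # instead of the hardcoded 4
    ) -> None:
        super().__init__()
        self.dataset = dataset
        self.batch_size = batch_size
        self.num_workers = num_workers
        self.persistent_workers = persistent_workers
        self.prefetch_factor = prefetch_factor

    def train_dataloader(self) -> DataLoader:
        return DataLoader(
            self.dataset,
            batch_size=self.batch_size,
            num_workers=self.num_workers,
            # both options require num_workers > 0
            persistent_workers=self.persistent_workers and self.num_workers > 0,
            prefetch_factor=self.prefetch_factor if self.num_workers > 0 else None,
        )
```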

Using PyTorch Lightning’s CombinedLoader with Distributed Data Parallel (DDP) spawns multiple processes (one per GPU) and seems to lead to excessive memory accumulation in a subset of the worker processes. Setting persistent_workers=False restarts the DataLoader workers at the beginning of each epoch, which prevents the accumulation of memory or disk space. There is a performance trade-off here, as there is with reducing the hardcoded prefetch_factor from 4 to 2.
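A minimal sketch of the relevant configuration, assuming two synthetic source datasets: a CombinedLoader wrapping DataLoaders whose workers are non-persistent, so they are torn down and respawned each epoch, releasing whatever they accumulated:

```python
import torch
from lightning.pytorch.utilities import CombinedLoader
from torch.utils.data import DataLoader, TensorDataset

# Stand-in datasets; in practice these would be the virtual staining sources.
source_a = TensorDataset(torch.randn(64, 3, 32, 32))
source_b = TensorDataset(torch.randn(64, 3, 32, 32))

loader_kwargs = dict(
    batch_size=8,
    num_workers=4,
    persistent_workers=False,  # workers restart each epoch
    prefetch_factor=2,         # reduced from the hardcoded 4
)
combined = CombinedLoader(
    {
        "a": DataLoader(source_a, **loader_kwargs),
        "b": DataLoader(source_b, **loader_kwargs),
    },
    mode="max_size_cycle",
)

# Iterating yields a dict of batches keyed like the input loaders.
for batch, batch_idx, dataloader_idx in combined:
    ...  # the training step would consume batch["a"] and batch["b"]
```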

edyoshikun avatar Nov 05 '24 23:11 edyoshikun

Using prefetch_factor=4 vs. prefetch_factor=2 has no effect on the training speed for the neuromast VS training. Here we are mostly limited by the CPU-to-GPU transfer bandwidth.
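One quick way to check this kind of claim (a sketch, assuming a CUDA device and a synthetic dataset): time one pass over the loader at each prefetch_factor; if the timings match, the loader is not the bottleneck and the host-to-device copies dominate.

```python
import time

import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(256, 3, 256, 256))
device = torch.device("cuda")

for prefetch in (2, 4):
    loader = DataLoader(
        dataset, batch_size=8, num_workers=4, prefetch_factor=prefetch
    )
    torch.cuda.synchronize()
    start = time.perf_counter()
    for (batch,) in loader:
        # the host-to-device copy is the suspected bottleneck
        batch = batch.to(device)
    torch.cuda.synchronize()
    print(f"prefetch_factor={prefetch}: {time.perf_counter() - start:.2f}s")
```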

edyoshikun avatar Nov 05 '24 23:11 edyoshikun

When I enable pinned memory in #195, I see this issue: https://github.com/pytorch/pytorch/issues/97432. But this is likely unrelated to the HCS datamodule, since that one does not use pinned memory.
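For context, a minimal sketch of the pinned-memory configuration being discussed (standard PyTorch API, not the VisCy code from #195): pin_memory=True makes the loader stage batches in page-locked host memory, which is what enables asynchronous host-to-device copies.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(64, 3, 64, 64))
# pin_memory interacts with the worker processes, which is where the
# upstream PyTorch issue linked above comes into play.
loader = DataLoader(dataset, batch_size=8, num_workers=2, pin_memory=True)

device = torch.device("cuda")
for (batch,) in loader:
    # non_blocking copies only overlap with compute when the source is pinned
    batch = batch.to(device, non_blocking=True)
```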

ziw-liu avatar Nov 14 '24 18:11 ziw-liu