fpn.pytorch
GPU memory issues with the ROI Pooling lib
Hi! Many thanks for your wonderful code. I moved the ROI pooling lib on its own into one of my projects, and it runs correctly after compiling. However, when I run my code with this lib, GPU memory grows from 2 GB to over 8 GB and CUDA eventually throws an 'out of memory' error. I suspect this comes from the varying ROI count per iteration combined with PyTorch's dynamic-graph mechanism: since the number of ROIs differs every iteration, a new graph is built each time, so GPU usage keeps climbing. Could you suggest how to revise the code so that GPU memory stays constant during training? Thanks!
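For what it's worth, a common cause of this kind of steady growth is holding references to graph-attached tensors across iterations (e.g. accumulating the raw loss tensor for logging), rather than the variable ROI count itself. Below is a minimal sketch of a training step that avoids that; `model`, `roi_pool`, `criterion`, `optimizer`, and `dataloader` are hypothetical stand-ins for the project's actual objects, not names from this repo:

```python
import torch

# Hypothetical training loop; `model`, `roi_pool`, `criterion`, `optimizer`,
# and `dataloader` stand in for the project's actual objects.
running_loss = 0.0
for images, rois, targets in dataloader:
    optimizer.zero_grad()

    features = model(images)            # backbone forward pass
    pooled = roi_pool(features, rois)   # ROI pooling over a variable-size ROI set
    loss = criterion(pooled, targets)

    loss.backward()
    optimizer.step()

    # Use .item() (a plain Python float) so the logging variable does not
    # keep this iteration's computation graph alive.
    running_loss += loss.item()

    # Optional: drop references and release cached blocks so fragmentation
    # from variable ROI counts does not accumulate (trades speed for memory).
    del features, pooled, loss
    torch.cuda.empty_cache()
```

The key point is that anything carried across iterations (running losses, debug tensors) should be detached or converted to Python scalars; otherwise every iteration's graph stays resident and memory grows exactly as described.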
I agree. I train on a 1080 Ti and always run out of memory, even during the test period.
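If memory also climbs at test time, it is worth checking that evaluation runs under `torch.no_grad()`; otherwise PyTorch still builds the autograd graph for every forward pass. A minimal sketch, again with hypothetical `model`, `roi_pool`, and `test_loader`:

```python
import torch

# Hypothetical evaluation loop; `model`, `roi_pool`, and `test_loader`
# are placeholders for the project's actual objects.
model.eval()                 # put layers like dropout/batchnorm in eval mode
with torch.no_grad():        # no graph is built, so activations are freed immediately
    for images, rois in test_loader:
        features = model(images)
        pooled = roi_pool(features, rois)
        # ... post-processing on `pooled` ...
```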