I get a memory error when I run demo.py:
RuntimeError: CUDA out of memory. Tried to allocate 88.00 MiB (GPU 0; 7.93 GiB total capacity; 6.82 GiB already allocated; 68.50 MiB free; 65.36 MiB cached)
Hi, demo.py adopts a ResNet152 backbone and multi-scale testing, which consumes a lot of GPU memory. You can try using fewer image scales for inference. We will release a more refined model.
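For reference, a minimal sketch of what "fewer image scales" could look like, assuming the multi-scale test loops over a list of shrink factors and stacks the per-scale detections (the names `SCALES`, `multi_scale_detect`, and `detect_at_scale` are placeholders, not the actual demo.py API):

```python
import numpy as np

# Hypothetical multi-scale loop: demo.py's real code differs, but the idea
# is the same -- dropping the largest scales cuts peak GPU memory.
SCALES = [0.5, 1.0]  # e.g. instead of something like [0.5, 1.0, 1.5, 2.0]

def multi_scale_detect(net, image, detect_at_scale):
    """detect_at_scale(net, image, s) is assumed to resize the image by s,
    run the network, and return an (N, 5) array of [x1, y1, x2, y2, score]."""
    all_dets = [detect_at_scale(net, image, s) for s in SCALES]
    return np.vstack(all_dets)
```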
Dear all, I also had the 'out of memory' problem. The issue is that the volatile flag for tensors is deprecated in torch >= 0.4, so the network keeps gradients while doing inference.
If you just want to run evaluation, add torch.set_grad_enabled(False) at the beginning of the test_oneimage() function.
It works for me, and memory usage drops to around 2 GB instead of 9 GB.
Be careful: this is only valid at test time, since it discards all gradients.
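A minimal sketch of where the call goes, assuming test_oneimage() is structured roughly like the demo's (only the first line is the actual suggestion; the rest of the body is a placeholder):

```python
import torch

def test_oneimage():
    # Disable autograd for everything below: on torch >= 0.4 the network
    # otherwise keeps activations for a backward pass that never happens,
    # which is what exhausts GPU memory during inference.
    torch.set_grad_enabled(False)
    # ... original body: build the net, load the image, run inference ...
```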
@yihongXU
This did not help. I also put with torch.no_grad(): at the beginning of def infer(), but neither worked.
I also commented out the parts for det_b and det_s and just set them equal to det0 and det1 (because I am too confused to refactor further), but I am still getting memory errors.
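For clarity, the no_grad attempt described above would look roughly like this (the argument names and body of infer() are illustrative, not the repo's exact code):

```python
import torch

def infer(net, image, transform, thresh, cuda, shrink):
    # Wrap the entire forward pass so autograd retains nothing.
    with torch.no_grad():
        x = transform(image, shrink)   # hypothetical preprocessing step
        if cuda:
            x = x.cuda()
        detections = net(x)            # forward pass only, no backward
    return detections
```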