
Is 6G GPU memory enough for training?

Open squirrel233 opened this issue 8 years ago • 2 comments

I have 2 GPUs on my PC, each with 6 GB of memory. I can train rbg's py-faster-rcnn project on one of them, but when I run /faster_rcnn_pytorch/train.py from this project, it immediately runs out of memory.

Referring to the FFRCNN project, they state that

For training the end-to-end version of Faster R-CNN with VGG16, 3G of GPU memory is sufficient (using CUDNN)

So I'm confused: how much GPU memory do I need to run /faster_rcnn_pytorch/train.py? Or could this run on 2 GPUs in parallel?

Thanks.

squirrel233 avatar Aug 10 '17 02:08 squirrel233
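On the second part of the question (running on 2 GPUs in parallel): the issue itself does not confirm that this repo supports it, but a minimal sketch of the standard PyTorch approach is to wrap the model in `torch.nn.DataParallel`, which splits each batch across the visible devices. The `nn.Linear` model below is a hypothetical stand-in for the Faster R-CNN network, used only to keep the sketch self-contained:

```python
import torch
import torch.nn as nn

# Hypothetical small model standing in for the Faster R-CNN network.
model = nn.Linear(10, 2)

# Wrap in DataParallel only when more than one GPU is visible;
# DataParallel replicates the module and splits each input batch
# along dim 0 across the available devices.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model).cuda()

x = torch.randn(4, 10)
out = model(x)
print(tuple(out.shape))  # (4, 2) regardless of how many GPUs are used
```

Note that `DataParallel` only reduces per-GPU *activation* memory (each card sees a smaller slice of the batch); the full set of weights and gradients is still replicated on every device, so it does not help if the model itself is too large for one card.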

I think 6 GB of memory is enough. When I trained by running train.py, it used about 4 GB.

bywbilly avatar Aug 18 '17 21:08 bywbilly
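For a rough sanity check on why reports range from ~3 GB to out-of-memory on 6 GB: the VGG16 backbone alone has 138,357,544 parameters, and training keeps not just the weights but also gradients and optimizer state resident. The back-of-the-envelope arithmetic below assumes FP32 weights and plain SGD with momentum (roughly tripling the weight memory); activations, which depend on image size and batch size, come on top of this and usually dominate:

```python
# Rough estimate of VGG16 parameter memory in FP32.
vgg16_params = 138_357_544      # standard VGG16 parameter count
bytes_per_param = 4             # FP32

weights_gb = vgg16_params * bytes_per_param / 1024**3

# Training roughly triples the weight memory:
# weights + gradients + SGD momentum buffers.
training_gb = 3 * weights_gb

print(f"{weights_gb:.2f} GB weights, ~{training_gb:.2f} GB with training state")
```

This puts the fixed cost at roughly 0.5 GB of weights and ~1.5 GB with training state, so the gap between the 3-4 GB reports and an out-of-memory failure is almost entirely in activation memory, which grows with input image area.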


Hello, I run train.py on an 8 GB GPU, but it runs out of memory. I'm using torch 0.4.1. What should I do if I want to run the training? Which parameters can I tune? Thank you very much.

jilner avatar Sep 16 '18 12:09 jilner
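On which parameters to tune: this repo's exact config keys are not shown in the thread, but Faster R-CNN implementations in the py-faster-rcnn lineage typically expose knobs like the ones sketched below. The key names here are illustrative assumptions, not necessarily this repo's actual keys; shrinking the input image scale usually has the largest effect, since conv feature-map memory grows with image area:

```python
# Sketch of typical memory-reducing knobs in a Faster R-CNN config.
# NOTE: these key names are illustrative, not this repo's actual keys.
cfg = {
    "TRAIN.SCALES": (600,),      # target length of the image's shorter side
    "TRAIN.MAX_SIZE": 1000,      # cap on the image's longer side
    "TRAIN.RPN_BATCHSIZE": 256,  # anchors sampled per image for the RPN loss
    "TRAIN.BATCH_SIZE": 128,     # RoIs sampled per image for the detection head
}

# Reducing the size cap shrinks every conv feature map proportionally
# to image area, which is usually the dominant memory cost.
cfg["TRAIN.MAX_SIZE"] = 800
cfg["TRAIN.SCALES"] = (500,)

print(cfg["TRAIN.MAX_SIZE"], cfg["TRAIN.SCALES"])
```

If image scale cannot be reduced further, lowering the sampled RoI/anchor counts or freezing the early VGG16 conv blocks (so their activations need not be kept for backprop) are the other common levers.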