Is 6G GPU memory enough for training?
I have 2 GPUs on my PC, each with 6G of memory. I can train rbg's py-faster-rcnn project on one of them. But when I run /faster_rcnn_pytorch/train.py from this project, it suddenly runs out of memory.
I referred to the FFRCNN project, which says:
"For training the end-to-end version of Faster R-CNN with VGG16, 3G of GPU memory is sufficient (using CUDNN)."
So I'm very confused: how much memory do I need to run /faster_rcnn_pytorch/train.py? Or could it run on 2 GPUs in parallel?
Thanks.
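Regarding the two-GPU question: this repo may not support it out of the box, but in plain PyTorch the standard way to split a batch across two cards is `nn.DataParallel`. A minimal sketch (the toy model here is mine, not this repo's network), which roughly halves per-GPU activation memory since each card sees half the batch:

```python
import torch
import torch.nn as nn

# Toy stand-in model (illustrative only, not the Faster R-CNN network).
model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10))

# Wrap in DataParallel when two GPUs are present; each forward pass then
# scatters the batch across device 0 and device 1 and gathers the outputs.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model, device_ids=[0, 1])
if torch.cuda.is_available():
    model = model.cuda()

x = torch.randn(8, 1024)
if torch.cuda.is_available():
    x = x.cuda()
out = model(x)
print(out.shape)  # torch.Size([8, 10])
```

Note that `DataParallel` only helps with activation memory; the full model weights are still replicated on each GPU.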
I think 6G of memory is enough. When I trained by running train.py, it used about 4G of memory.
Hello, I ran train.py on an 8G GPU, but it ran out of memory. I'm using torch 0.4.1. What should I do to get training running? Which parameter can I tune? Thank you very much.
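When an 8G card still OOMs, it helps to first measure where the memory is going before tuning anything (the usual levers are a smaller batch size or fewer region proposals, but the right knob depends on this repo's config). A small sketch, assuming CUDA is available; the helper name is mine, not part of this repo:

```python
import torch

def log_gpu_memory(tag):
    # Illustrative helper (my naming): print current and peak allocated
    # tensor memory so you can see how close training is to the 6G/8G limit.
    # Both calls exist in torch 0.4.x as well as recent versions.
    if not torch.cuda.is_available():
        print(f"{tag}: no CUDA device available")
        return
    cur = torch.cuda.memory_allocated() / 1024 ** 2
    peak = torch.cuda.max_memory_allocated() / 1024 ** 2
    print(f"{tag}: {cur:.0f} MiB allocated, {peak:.0f} MiB peak")

log_gpu_memory("startup")
```

Calling this before and after the forward/backward pass of one iteration shows whether the weights, the activations, or cached blocks dominate, which tells you which parameter is worth tuning.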