GPU memory usage + speed
Hi, I'm kind of new to this field. I'm studying computer engineering and trying to train your NN with data of my own. The problem is that when I launch training, I get:

```
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      3113    C   python                                          74MiB |
+-----------------------------------------------------------------------------+
```

This is on a GeForce GTX 980 with 4036MiB, and each epoch takes ~4 hours. I don't know whether this is due to my data, the NN, or some option I haven't discovered yet. I would really appreciate the help. Thanks in advance.
I have the same problem. In my opinion, there is some delay passing data from nn.rpn to nn.classifier (GPU → CPU → GPU).
Reduce the number of epochs
3L: why would reducing the number of epochs help?
2L: I trained the RPN separately, but I still have this problem.
OK!! I finally found the reason. Don't use `conda install keras` in Anaconda; it pulls in the CPU-only version of TensorFlow. Run `conda uninstall keras`, then `pip install keras`. After that, I can use all my GPU memory.
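For anyone hitting the same symptom (tiny GPU memory usage, very slow epochs), a quick way to confirm whether your TensorFlow install actually sees the GPU is a sketch like the one below. It is a hedged example, not part of the original repo; the function name `tensorflow_gpu_visible` is made up here, and it tries the TF 2.x API first and falls back to the older TF 1.x check:

```python
def tensorflow_gpu_visible():
    """Return True if TensorFlow can see a GPU, False if it is CPU-only,
    or None if TensorFlow is not installed at all."""
    try:
        import tensorflow as tf
    except ImportError:
        return None  # no TensorFlow in this environment
    try:
        # TF 2.x API: list physical GPU devices
        return len(tf.config.list_physical_devices("GPU")) > 0
    except AttributeError:
        # Older TF 1.x API (deprecated in 2.x)
        return tf.test.is_gpu_available()


if __name__ == "__main__":
    visible = tensorflow_gpu_visible()
    if visible is None:
        print("TensorFlow is not installed")
    elif visible:
        print("GPU is visible to TensorFlow")
    else:
        print("CPU-only TensorFlow: try `pip install keras` / a GPU build of TensorFlow")
```

If this prints the CPU-only message while `nvidia-smi` shows your GPU, the conda-vs-pip issue described above is the likely cause.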