
GPU out of memory error

Open mrcharlie90 opened this issue 7 years ago • 3 comments

Hello everyone, I'm testing your code for a project and I'm getting the following errors when I run demo_multiperson.m:

(in the console)

F0923 15:47:45.735599 4029 syncedmem.cpp:56] Check failed: error == cudaSuccess (2 vs. 0) out of memory
*** Check failure stack trace: ***
Killed

(in Matlab)

Cleared 0 solvers and 0 stand-alone nets
save dir .../git/deepcut/data/mpii-multiperson/scoremaps/test
testing from net file /home/marco/Desktop/mauro-skeletal-tracker/git/deepcut/data/caffe-models/ResNet-101-mpii-multiperson.caffemodel
deepcut: test (MPII multiperson test) 2/1758

with a Matlab crash.

My video card is an NVidia GeForce GTX 760 Ti with 2 GB of memory.

I'm new to deep learning and Caffe, but I've read on the web that it is sometimes possible to run tests on graphics cards with less memory, as in my case, by changing some parameters (like the batch size). Is that possible in deepcut? Where could I change those parameters in your code? Thank you in advance!
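For context: in stock Caffe, the batch size of a deploy/test network is the first dimension of the input blob declared in the prototxt, and shrinking it (or the input resolution) is the usual first step against out-of-memory errors. The fragment below is a generic, hypothetical example of that pattern, not taken from deepcut's actual model definition (blob name and dimensions are illustrative):

```protobuf
# Hypothetical deploy.prototxt fragment -- illustrative values only.
input: "data"
input_shape {
  dim: 1    # batch size: reduce this first if the GPU runs out of memory
  dim: 3    # channels
  dim: 480  # height: lowering input resolution also cuts memory use
  dim: 640  # width
}
```

Note that if the test already runs one image at a time (batch size 1), only reducing the input resolution or moving to a larger card will help.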

mrcharlie90 avatar Sep 23 '16 14:09 mrcharlie90

My NVidia card has 4 GB and Matlab still crashes when running this code. I had to run the code on a 12 GB Titan card.

minhtriet avatar Sep 30 '16 13:09 minhtriet

Thank you for your answer. I also tried on a computer with set_cpu_mode enabled, and the demo works.

mrcharlie90 avatar Sep 30 '16 17:09 mrcharlie90

CPU_ONLY gives me this error instead:

src/caffe/layers/softmax_loss_vec_layer.cpp:254:10: error: redefinition of ‘void caffe::SoftmaxWithLossVecLayer<Dtype>::Forward_gpu(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&)’
STUB_GPU(SoftmaxWithLossVecLayer);

I have opened a new issue here.

minhtriet avatar Oct 04 '16 14:10 minhtriet