deeppy
Expected runtime for convnet_mnist.py?
Is there an expected (ballpark, rough estimate) runtime for convnet_mnist.py? I ran mlp_mnist.py and the script finished extremely quickly. But for convnet_mnist.py, I've been sitting at the same output for over 30 minutes, which seems extremely slow given that the Caffe MNIST example finishes in a couple of minutes:
INFO SGD: Model contains 127242 parameters.
INFO SGD: 469 mini-batch gradient updates per epoch.

(no extra output after this)
I profiled the GPU, and it turns out the CPU was being utilized the entire time (hence the long runtimes). I tried to compile the cudarray dependency with cuDNN support, but that led to compilation errors. Is it possible to use deeppy on the GPU without cuDNN?
Hey @jrosebr1. Yes, it is possible to compile and run cudarray on the GPU without cuDNN. In that case, the matmul-based functions will be used instead. This is controlled by setting the CUDNN_ENABLED environment variable.
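In case it helps, here is a minimal sanity check for the GPU back-end. This is only a sketch: it assumes the CUDARRAY_BACKEND environment variable and the NumPy-like array/dot API described in the cudarray README, which may differ in your version.

```python
# Sanity check: is the CUDA back-end actually in use?
# Assumption: cudarray selects its back-end from the CUDARRAY_BACKEND
# environment variable ('cuda' or 'numpy'); it must be set before the import.
import os
os.environ['CUDARRAY_BACKEND'] = 'cuda'

import numpy as np
import cudarray as ca

# A small matrix multiply, repeated so the load is visible; run `nvidia-smi`
# in another terminal to confirm the GPU (not the CPU) is doing the work.
a = ca.array(np.random.uniform(size=(1024, 1024)).astype(np.float32))
b = ca.array(np.random.uniform(size=(1024, 1024)).astype(np.float32))
for _ in range(100):
    c = ca.dot(a, b)
print(c.shape)  # expect (1024, 1024)
```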
@lre Thanks for the comment. Just to clarify: setting CUDNN_ENABLED=1 will compile cudarray with cuDNN support (and in my case, leads to a compilation error). Given this, I removed the CUDNN_ENABLED environment variable and compiled cudarray as is. Was I supposed to set CUDNN_ENABLED=0 to indicate that I still want GPU support?
@jrosebr1: Sorry about the lack of response from my part. I have been unable to work due to illness.
From your first message it sounds like an error is preventing you from using the GPU. When using the GPU, CUDArray/DeepPy is very competitive speed-wise.
Regarding CUDNN_ENABLED=0: In this case, CUDArray falls back to convolution by matrix multiplications on the GPU (Caffe style). While this is pretty fast compared to a CPU, I recommend using cuDNN.
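For reference, that fallback is essentially im2col followed by a matrix multiplication (GEMM). Below is a rough NumPy sketch of the idea, not the actual CUDArray implementation, with stride 1 and no padding for brevity:

```python
import numpy as np

def conv2d_via_matmul(x, w):
    """Convolution as im2col + matrix multiplication (illustration only).

    x: input image, shape (channels, height, width)
    w: filters, shape (num_filters, channels, fh, fw)
    Returns feature maps of shape (num_filters, out_h, out_w).
    Stride 1, no padding, no kernel flipping (cross-correlation).
    """
    c, h, w_in = x.shape
    f, _, fh, fw = w.shape
    out_h, out_w = h - fh + 1, w_in - fw + 1

    # im2col: unroll each receptive field into a column.
    cols = np.empty((c * fh * fw, out_h * out_w), dtype=x.dtype)
    idx = 0
    for i in range(out_h):
        for j in range(out_w):
            cols[:, idx] = x[:, i:i + fh, j:j + fw].ravel()
            idx += 1

    # A single large GEMM evaluates every filter at every position.
    out = np.dot(w.reshape(f, -1), cols)
    return out.reshape(f, out_h, out_w)

# Tiny usage example
x = np.random.randn(3, 8, 8).astype(np.float32)
w = np.random.randn(4, 3, 3, 3).astype(np.float32)
print(conv2d_via_matmul(x, w).shape)  # (4, 6, 6)
```

The point is that a single large matrix multiplication replaces the nested convolution loops, which is why this path is still reasonably fast on the GPU even without cuDNN.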
Feel free to ignore this post as you have probably moved on since then! :)
@andersbll Thanks for the reply! I'll be sure to give cuDNN another try. I'm still not exactly sure what the error was in this case: when I set CUDNN_ENABLED=1, errors ended up being thrown, and when CUDNN_ENABLED=0, only the CPU was being utilized.
Ok! Let me know if you run into any error messages.