AttentionedDeepPaint
Cuda Causes Trainer to Hang
Weird question: I have 2 CUDA cards and the code only uses one. It runs great on my laptop because it's on the CPU. Can you tell me how to have it use the 2nd card, or how to force CPU?
RuntimeError: CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 8.00 GiB total capacity; 6.15 GiB already allocated; 58.05 MiB free; 83.38 MiB cached)
I removed CUDA from the workstation and it works great on CPU.
Sorry for the late reply :( If you want to use the second CUDA card, please check that
$ nvidia-smi
shows both cards. If you want to force the CPU, just replace
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
with
device = 'cpu'
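To target the second card instead, one option is a sketch like the following (hypothetical; it assumes the repo's existing `device` variable is the only place the device is chosen, and falls back to CPU when CUDA is unavailable):

```python
import torch

# Pick the second GPU (index 1) when two cards are visible,
# otherwise fall back to the first GPU or the CPU.
if torch.cuda.is_available() and torch.cuda.device_count() > 1:
    device = torch.device('cuda:1')  # second card
elif torch.cuda.is_available():
    device = torch.device('cuda:0')
else:
    device = torch.device('cpu')

print(device)
```

Alternatively, you can restrict the process to the second card before CUDA initializes, so the existing `'cuda'` device maps to it without code changes, e.g. `CUDA_VISIBLE_DEVICES=1 python train.py` (script name here is illustrative).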
nvidia-smi gave me:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.56       Driver Version: 418.56       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1070    Off  | 00000000:06:00.0 Off |                  N/A |
| 28%   33C    P8     5W / 151W |      2MiB /  8119MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 1070    Off  | 00000000:07:00.0  On |                  N/A |
|  0%   49C    P8    15W / 151W |   1106MiB /  8118MiB |      3%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    1      1352      G   /usr/lib/xorg/Xorg                            28MiB |
|    1      1394      G   /usr/bin/gnome-shell                          50MiB |
|    1      1749      G   /usr/lib/xorg/Xorg                           184MiB |
|    1      2225      G   ...uest-channel-token=12045571343934226885   690MiB |
|    1     17690      G   /usr/bin/gnome-shell                         148MiB |
+-----------------------------------------------------------------------------+

Thank you.. I'll change the code.