3D-UNet
Why does the code always run on GPU 0?
```python
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'
```
I use the code above to select GPU 1, but the error message still shows the code running on GPU 0:
```
RuntimeError: CUDA out of memory. Tried to allocate 7.00 GiB (GPU 0; 23.70 GiB total capacity; 21.29 GiB already allocated; 870.81 MiB free; 21.30 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
@wzh0328, this happens because when a model or tensor is moved to the GPU with model.cuda() or data.cuda(), it lands on GPU 0 by default. If you want a different GPU, you need to specify its index explicitly, e.g. model.to("cuda:1") or data.to("cuda:1").
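A minimal sketch of that explicit device placement (the model and tensor here are just placeholders, and it falls back to CPU when fewer than two GPUs are visible):

```python
import torch

# Use physical GPU 1 when at least two GPUs are visible; otherwise fall
# back to CPU so the snippet also runs on machines without GPUs.
device = torch.device("cuda:1" if torch.cuda.device_count() > 1 else "cpu")

model = torch.nn.Linear(4, 2).to(device)  # instead of model.cuda()
data = torch.randn(8, 4).to(device)       # instead of data.cuda()

out = model(data)
print(out.device)
```

Every tensor that participates in the forward pass has to be on the same device, so the same `.to(device)` call is needed on inputs, labels, and the model alike.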
or
If you run the code as a Python script, you can instead launch it with CUDA_VISIBLE_DEVICES=1 python train.py; the process then ignores GPU 0 and sees only GPU 1 (which it reports internally as device 0).
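Note that the os.environ route from the question only takes effect if the variable is set before the process first initializes CUDA; setting it after torch has already touched the GPU does nothing. A sketch of the safe ordering (training details omitted):

```python
import os

# CUDA device visibility is fixed the moment the runtime first touches
# CUDA, so set this before importing torch or calling any .cuda() code.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# import torch                   # import only after the variable is set
# torch.cuda.current_device()    # would now report 0, i.e. physical GPU 1

print(os.environ["CUDA_VISIBLE_DEVICES"])
```

Inside such a process, "GPU 0" in any error message refers to physical GPU 1, which is why the command-line form `CUDA_VISIBLE_DEVICES=1 python train.py` is the least error-prone option.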