cp-vton-plus
train.py is using the CPU instead of the GPU
I started train.py with this command:
python train.py --name GMM --stage GMM --workers 1 --save_count 5000 --shuffle
and after 1000 steps, this is the CPU and GPU usage:

What can I do to solve this? I want to train using my GPU.
@iamnaazib, you can add your GPU id(s) here: https://github.com/minar09/cp-vton-plus/blob/master/train.py#L21 as parser.add_argument("--gpu_ids", default="0"),
or add the argument to the run command: python train.py --name GMM --stage GMM --workers 1 --save_count 5000 --shuffle --gpu_ids 0
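In case it helps, here is a rough sketch of how such a flag is usually wired up. I'm not certain this matches how train.py consumes --gpu_ids internally, so the CUDA_VISIBLE_DEVICES handling and the model/input .cuda() calls mentioned in the comments are illustrative assumptions, not the repo's actual code:

import argparse
import os

import torch

parser = argparse.ArgumentParser()
parser.add_argument("--gpu_ids", default="0")  # comma-separated ids, e.g. "0" or "0,1"
opt = parser.parse_args()

# Restrict CUDA to the requested devices; this has to happen before the first CUDA call.
os.environ["CUDA_VISIBLE_DEVICES"] = opt.gpu_ids

print("CUDA available:", torch.cuda.is_available())

# The flag alone does nothing unless the model and batches are actually moved to the GPU,
# e.g. model.cuda() and inputs.cuda() in the training loop (hypothetical calls here).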
I did that, but it's still using the CPU and I don't know why. A single image is taking 3-4 hours to train. Can I ask which hardware you used to train your dataset?
@iamnaazib, maybe check your GPU drivers and environment. We used TITAN Xp GPUs for our experiments.
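One quick check worth adding here: if the installed PyTorch wheel is a CPU-only build, no flag will ever reach the GPU. Something like this (standard PyTorch attributes) shows which build you have:

import torch

print(torch.__version__)          # a "+cpu" suffix means a CPU-only wheel is installed
print(torch.version.cuda)         # CUDA version the wheel was built against; None on CPU-only builds
print(torch.cuda.is_available())  # False if the driver / CUDA runtime combination is not usable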
I have checked, and there doesn't seem to be any issue with the drivers or the environment.
Could you still tell me what the driver and environment requirements are for an NVIDIA graphics card to run this training?
The GPU spikes to 100% usage at the start but isn't used after that.
I think you can first check whether PyTorch can access your GPUs. For example, please check this: https://stackoverflow.com/questions/48152674/how-to-check-if-pytorch-is-using-the-gpu or this: https://discuss.pytorch.org/t/torch-cuda-is-available-is-true-while-i-am-using-the-gpu/29470
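For instance, a minimal check along these lines (plain PyTorch calls) confirms both that CUDA is visible and that tensors actually land on the GPU; the dummy tensor shape is arbitrary:

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)  # should print "cuda", not "cpu"
if device.type == "cuda":
    print(torch.cuda.get_device_name(0))  # name of the card PyTorch sees

x = torch.randn(4, 3, 256, 192).to(device)  # arbitrary dummy batch to test device placement
print(x.is_cuda)  # True only if the tensor really moved to the GPU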