guided-diffusion
Is distributed training on multiple CPUs possible?
In train_util.py, there is a warning message: "Distributed training requires CUDA. Gradients will not be synchronized properly!" But torch.distributed supports distributed training on CPUs, so what does this warning mean?
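For reference, here is a minimal sketch of what I mean by CPU distributed training in stock PyTorch: the "gloo" backend works without CUDA, and DistributedDataParallel will synchronize gradients over it. (This toy model and launch command are just an illustration, not code from this repo.)

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # Assumes launch via `torchrun --nproc_per_node=2 this_script.py`,
    # which sets RANK / WORLD_SIZE / MASTER_ADDR / MASTER_PORT for us.
    dist.init_process_group(backend="gloo")  # gloo runs on CPU; NCCL requires CUDA

    model = torch.nn.Linear(10, 1)  # toy model, kept on CPU
    ddp_model = DDP(model)          # gradients synchronized via gloo all-reduce
    opt = torch.optim.SGD(ddp_model.parameters(), lr=0.1)

    x = torch.randn(4, 10)
    loss = ddp_model(x).sum()
    loss.backward()  # gradient all-reduce across ranks happens here
    opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

So gradient synchronization on CPUs seems possible in principle; is the warning just because this repo's setup assumes a CUDA-only backend?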