densenet.pytorch
Multi-GPU implementation
Hi author
Thanks for sharing your code. I noticed in the README that you said "Multi-gpu help wanted". If you mean data parallelism, it can be implemented in a few lines in PyTorch using nn.DataParallel.
In your train.py at line 82, simply change

    if args.cuda:
        net = net.cuda()

to

    if args.cuda:
        net = net.cuda()
        # requires: import torch.nn as nn
        # note the keyword is device_ids, not devices
        net = nn.DataParallel(net, device_ids=[0, 1, 2, 3])

to make the whole model data-parallel.
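For completeness, here is a minimal self-contained sketch of the same idea (a toy stand-in, not the code in train.py, and it assumes at least two visible GPUs):

    import torch
    import torch.nn as nn

    # toy stand-in for the DenseNet
    net = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
    )
    net = net.cuda()
    net = nn.DataParallel(net, device_ids=[0, 1])

    x = torch.randn(32, 3, 32, 32).cuda()  # batch is split across the GPUs
    out = net(x)                           # per-GPU outputs are gathered on GPU 0
    print(out.shape)                       # torch.Size([32, 10])

One gotcha: the wrapped model lives at net.module, so saving net.module.state_dict() keeps checkpoints loadable on a single GPU.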
I wonder, did you skip implementing it because of some concern, like gradient correctness, numerical stability, or convergence? I just migrated from Torch, and I heard there are still some bugs in PyTorch.
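If you want to rule out the gradient-correctness worry directly, here is a quick sanity check one could run (a sketch only; it assumes two visible GPUs and uses a toy model, not the network in train.py):

    import copy
    import torch
    import torch.nn as nn

    model = nn.Linear(10, 2).cuda()  # toy stand-in for the real network
    parallel = nn.DataParallel(copy.deepcopy(model), device_ids=[0, 1])

    x = torch.randn(8, 10).cuda()
    target = torch.randint(0, 2, (8,)).cuda()
    criterion = nn.CrossEntropyLoss()

    criterion(model(x), target).backward()
    criterion(parallel(x), target).backward()

    for p, q in zip(model.parameters(), parallel.module.parameters()):
        # gradients should agree up to floating-point reduction order
        print(torch.allclose(p.grad, q.grad, atol=1e-6))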
I had the same question. @Lyken17 Did you train on multiple GPUs?
Hi, I didn't try training on multiple GPUs. The issues @Lyken17 mentions could happen in principle, but I wouldn't expect them to.
Hello @varun-suresh @bamos, though this reply is two months late, I want to let you know that multiple GPUs work as expected in PyTorch.
I used the default settings in the code. With a single GPU I get an error rate of 5.01%; with 2 GPUs I get 4.67%. Experiments on 3 and 4 GPUs are on the way, and I believe they will converge well. I will push a PR after the experiments.
- One GPU
- Two GPUs
  python train.py --gpus 0,1  54374.10s user 3590.60s system 143% cpu 11:12:57.53 total
- Three GPUs
PS: I love the 1080 Ti -- the most cost-efficient card! I can buy more cards for the same cost.