
Multi-GPU implementation

Lyken17 opened this issue 7 years ago · 4 comments

Hi author,

Thanks for sharing your code. I noticed that in the README you say "Multi-gpu help wanted". If you mean data parallelism, it can be implemented in a few lines in PyTorch using nn.DataParallel.

In your train.py, at line 82, simply change

    if args.cuda:
        net = net.cuda()

to

    if args.cuda:
        net = net.cuda()
        net = nn.DataParallel(net, device_ids=[0, 1, 2, 3])

to make the whole model data-parallel.
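
For anyone who wants to try this outside the repo, here is a minimal self-contained sketch of the same pattern (the tiny linear model is a placeholder of my own, not this repo's DenseNet):

    # Minimal sketch of data parallelism; the model is a stand-in, not DenseNet.
    import torch
    import torch.nn as nn

    net = nn.Linear(100, 10)
    if torch.cuda.is_available():
        net = net.cuda()
        if torch.cuda.device_count() > 1:
            # Each forward pass splits the batch along dim 0, replicates the
            # module on every visible GPU, and gathers outputs on device 0.
            net = nn.DataParallel(net, device_ids=list(range(torch.cuda.device_count())))

    x = torch.randn(32, 100)
    if torch.cuda.is_available():
        x = x.cuda()
    print(net(x).shape)  # torch.Size([32, 10])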

Lyken17 avatar Mar 24 '17 07:03 Lyken17

I wonder, did you hold off on implementing it because of some concern, like gradient correctness, numerical stability, or convergence? I just migrated from Torch and have heard there are still some bugs in PyTorch.
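
One cheap way to sanity-check the gradient-correctness worry (a sketch of my own, not code from this repo): run the same batch through a plain module and through an nn.DataParallel wrapper around that same module, then compare the accumulated gradients. Tiny float differences from reduction order are expected.

    # Sketch: compare gradients from a plain module vs. its DataParallel wrapper.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    model = nn.Linear(10, 2).cuda()
    parallel = nn.DataParallel(model)  # shares the same underlying parameters

    x = torch.randn(8, 10).cuda()
    y = torch.randint(0, 2, (8,)).cuda()
    criterion = nn.CrossEntropyLoss()

    model.zero_grad()
    criterion(model(x), y).backward()
    grads_plain = [p.grad.clone() for p in model.parameters()]

    model.zero_grad()
    criterion(parallel(x), y).backward()
    grads_dp = [p.grad.clone() for p in model.parameters()]

    # Expect near-equality; small differences can come from reduction order.
    print(all(torch.allclose(a, b, atol=1e-6) for a, b in zip(grads_plain, grads_dp)))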

Lyken17 avatar Mar 24 '17 07:03 Lyken17

I had the same question. @Lyken17, did you train on multiple GPUs?

varun-suresh avatar May 04 '17 14:05 varun-suresh

Hi, I didn't try training on multiple GPUs. The issues @Lyken17 mentions could potentially happen, but I wouldn't expect them to.

bamos avatar May 04 '17 15:05 bamos

Hello @varun-suresh @bamos, though this reply is two months late, I want to tell you that multi-GPU training works as expected in PyTorch.

I used the default settings in the code. With a single GPU I get an error rate of 5.01%; with 2 GPUs I get 4.67%. Experiments on 3 and 4 GPUs are on the way, and I believe they will converge well. I will push a PR after the experiments.

  • One GPU: [loss/error plot]

  • Two GPUs: python train.py --gpus 0,1 (54374.10s user, 3590.60s system, 143% cpu, 11:12:57.53 total) [loss/error plot]; see the sketch after this list for one way to wire up the --gpus flag

  • Three GPUs: [loss/error plot]
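
For anyone wiring this up themselves, here is a hypothetical sketch of how a --gpus flag like the one above could be parsed and passed to nn.DataParallel (the flag name and the placeholder model are my assumptions; this repo's train.py may differ):

    # Hypothetical handling of a --gpus flag; the actual train.py may differ.
    import argparse
    import torch
    import torch.nn as nn

    parser = argparse.ArgumentParser()
    parser.add_argument('--gpus', type=str, default='0',
                        help='comma-separated GPU ids, e.g. "0,1"')
    args = parser.parse_args()

    device_ids = [int(i) for i in args.gpus.split(',')]
    net = nn.Linear(100, 10)  # placeholder for the DenseNet
    if torch.cuda.is_available():
        net = net.cuda(device_ids[0])  # parameters live on the first listed GPU
        if len(device_ids) > 1:
            net = nn.DataParallel(net, device_ids=device_ids)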

PS: I love the 1080 Ti -- the most cost-efficient card! I can buy more cards for the same total cost.

Lyken17 avatar Jul 19 '17 23:07 Lyken17