efficient_densenet_pytorch

MultiGPU efficient densenets are slow

Open wandering007 opened this issue 6 years ago • 14 comments

I just wanted to benchmark the new implementation of the efficient DenseNet with the code here. However, it seems that the checkpointed modules are not broadcast to multiple GPUs, as I got the following error:

  File "/home/changmao/efficient_densenet_pytorch/models/densenet.py", line 16, in bn_function
    bottleneck_output = conv(relu(norm(concated_features)))
  File "/home/changmao/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/changmao/anaconda3/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 49, in forward
    self.training or not self.track_running_stats, self.momentum, self.eps)
  File "/home/changmao/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 1194, in batch_norm
    training, momentum, eps, torch.backends.cudnn.enabled
RuntimeError: Expected tensor for argument #1 'input' to have the same device as tensor for argument #2 'weight'; but device 1 does not equal 0 (while checking arguments for cudnn_batch_norm)

I think the checkpointing feature provides only weak support for nn.DataParallel.
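(For context, a minimal illustrative sketch of the pattern involved, not the repo's code; the module, names, and sizes below are made up: a checkpointed bottleneck-style block wrapped in nn.DataParallel, assuming PyTorch >= 0.4 and at least two GPUs.)

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint


class BottleneckLike(nn.Module):
    """Toy stand-in for a DenseNet bottleneck (norm -> relu -> conv)."""

    def __init__(self, channels):
        super().__init__()
        self.norm = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv = nn.Conv2d(channels, 4 * channels, kernel_size=1, bias=False)

    def forward(self, x):
        def bn_function(inp):
            return self.conv(self.relu(self.norm(inp)))

        # Recompute bn_function during backward instead of storing activations.
        return checkpoint(bn_function, x)


model = nn.DataParallel(BottleneckLike(8)).cuda()
x = torch.randn(4, 8, 16, 16, device="cuda", requires_grad=True)
model(x).sum().backward()
```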

wandering007 avatar Apr 30 '18 17:04 wandering007

Oooh @wandering007 good catch. I'll take a look.

gpleiss avatar May 08 '18 12:05 gpleiss

@gpleiss This re-implementation (https://github.com/wandering007/efficient-densenet-pytorch) has good support for nn.DataParallel, which may be helpful.

wandering007 avatar May 09 '18 08:05 wandering007

I submitted a pull request for this: https://github.com/gpleiss/efficient_densenet_pytorch/pull/39

ZhengRui avatar May 13 '18 22:05 ZhengRui

Just merged in #39. @wandering007, can you confirm that this fixes the issue?

gpleiss avatar May 13 '18 23:05 gpleiss

@gpleiss Yes, it works fine.
However, there is one thing that I've noticed before and have to mention, though it is out of the scope of this issue. With the checkpointing feature, the whole autograd computation graph is broken into pieces. The current nn.DataParallel backward pass roughly does 1) backward on each GPU asynchronously and 2) inter-GPU communication to collect/gather the weight gradients of each piece of the graph. That is, if a checkpointed segment contains weights to be updated, there is an inter-GPU synchronization step to accumulate their gradients, which is time-consuming. Considering that the current efficient DenseNet contains so many checkpointed nn.BatchNorm2d modules, a lot of time is spent on inter-GPU communication for gradient accumulation. From my test, the backward pass of the efficient DenseNet on multiple GPUs is at least 100x slower than that of the normal version...
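(As an illustration of the kind of comparison meant here, a hedged timing sketch rather than the actual benchmark; the block, depth, and sizes are invented: time only the backward pass of a stack of checkpointed vs. plain norm-relu-conv blocks under nn.DataParallel.)

```python
import time

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.checkpoint import checkpoint


class Block(nn.Module):
    """Illustrative norm -> relu -> conv block, optionally checkpointed."""

    def __init__(self, channels, use_checkpoint):
        super().__init__()
        self.norm = nn.BatchNorm2d(channels)
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.use_checkpoint = use_checkpoint

    def _fn(self, x):
        return self.conv(F.relu(self.norm(x)))

    def forward(self, x):
        if self.use_checkpoint:
            # Each checkpointed segment becomes its own piece of the autograd graph.
            return checkpoint(self._fn, x)
        return self._fn(x)


def timed_backward(use_checkpoint, depth=50):
    model = nn.DataParallel(
        nn.Sequential(*[Block(64, use_checkpoint) for _ in range(depth)])
    ).cuda()
    x = torch.randn(32, 64, 32, 32, device="cuda", requires_grad=True)
    out = model(x).sum()
    torch.cuda.synchronize()
    start = time.time()
    out.backward()
    torch.cuda.synchronize()
    return time.time() - start


print("plain       :", timed_backward(False))
print("checkpointed:", timed_backward(True))
```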

wandering007 avatar May 14 '18 04:05 wandering007

@wandering007 hmmm that is problematic...

In general, I think that the checkpointing-based approach is probably what we should be doing moving forward. The original version was using some low-level calls which are no longer available in PyTorch. Using those low-level calls would require some C code, which is in my opinion undesirable for this package.

However, it sounds like the checkpointing-based code is practically unusable for the multi-GPU scenario. It's probably worthwhile bringing up an issue in the PyTorch repo about this. I'll see if there's a better solution in the meantime.

gpleiss avatar May 14 '18 12:05 gpleiss

@gpleiss It may be tough for now... To be frank, I still favor the previous implementation (v0.3.1) via the _EfficientDensenetBottleneck class and the _DummyBackwardHookFn function, which doesn't touch any C code. I've just made some improvements on it and it seems quite neat and workable with PyTorch v0.4. You can check https://github.com/wandering007/efficient-densenet-pytorch/tree/master/models if you are interested.

wandering007 avatar May 24 '18 03:05 wandering007

Maybe this issue could have been made clearer in the README. I followed this implementation in my project but found it doesn't work with DataParallel...

yzcjtr avatar Oct 21 '18 04:10 yzcjtr

@yzcjtr you might be experiencing a different problem. According to my tests, this should work with DataParallel. Can you post the errors that you're seeing?

gpleiss avatar Oct 21 '18 13:10 gpleiss

I just got the Segmentation fault (core dumped) error when running with multiple GPUs. Does anyone know how to solve this problem?

theonegis avatar Oct 23 '18 20:10 theonegis

@theonegis can you provide more information? What version of PyTorch, what OS, what version of CUDA, what GPUs, etc.? Also, could you open up a new issue for this?

gpleiss avatar Oct 23 '18 20:10 gpleiss

@gpleiss I have opened a new issue: Segmentation fault (core dumped) error for multiple GPUs. Thanks a lot.

theonegis avatar Oct 23 '18 20:10 theonegis

Hi @gpleiss, really sorry for my previous misunderstanding. I'm facing a similar situation to @theonegis's. I will provide more information in his new issue. Thanks.

yzcjtr avatar Oct 23 '18 20:10 yzcjtr

The official PyTorch checkpointing is slow on multiple GPUs, as explained by @wandering007. https://github.com/csrhddlam/pytorch-checkpoint solves this issue.

csrhddlam avatar Dec 01 '18 23:12 csrhddlam