
depthwise_conv2d_implicit_gemm slower than nn.Conv2d

Open wdmwhh opened this issue 2 years ago • 10 comments

🐛 Describe the bug

Calling depthwise_conv2d_implicit_gemm.DepthWiseConv2dImplicitGEMM on CUDA is orders of magnitude slower than calling torch.nn.Conv2d.

I have installed it according to the README. [screenshot]

cc: @DingXiaoH

Versions: torch 1.8.2+cuda11.1, cuda-11.1.1 + cudnn-8.1.1, on both A100 and V100

wdmwhh avatar Apr 07 '22 08:04 wdmwhh

Here I attach the speed test dwblocks_speed.py. [screenshot]

Tested on python 3.7.11 + torch 1.8.2 + cuda-11.1.1 + cudnn-8.1.1 + V100.
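The script itself survives only as a screenshot, so here is a hypothetical reconstruction of such a comparison. The DepthWiseConv2dImplicitGEMM constructor arguments (channels, kernel size) follow the README; if the extension or a GPU is unavailable, the sketch falls back to an equivalent depthwise nn.Conv2d:

```python
import time
import torch
import torch.nn as nn

# Hypothetical reconstruction: the real dwblocks_speed.py was attached only
# as a screenshot. The DepthWiseConv2dImplicitGEMM constructor below follows
# the README (channels, kernel_size); if the extension (or a GPU) is not
# available, fall back to an equivalent depthwise nn.Conv2d.
try:
    from depthwise_conv2d_implicit_gemm import DepthWiseConv2dImplicitGEMM
    assert torch.cuda.is_available()
    dw_gemm = DepthWiseConv2dImplicitGEMM(64, 31, bias=False).cuda()
    device = "cuda"
except (ImportError, AssertionError):
    dw_gemm = nn.Conv2d(64, 64, 31, padding=15, groups=64, bias=False)
    device = "cpu"

dw_conv = nn.Conv2d(64, 64, 31, padding=15, groups=64, bias=False).to(device)

x = torch.randn(1, 64, 56, 56, device=device)  # batch size 1, as reported
with torch.no_grad():
    t0 = time.time()
    y = dw_gemm(x)
    t1 = time.time()
    z = dw_conv(x)
    t2 = time.time()
print(f"implicit GEMM: {t1 - t0:.6f}s  nn.Conv2d: {t2 - t1:.6f}s")
```

Note that a single unsynchronized forward pass like this only measures kernel launch time on CUDA, which is exactly the point raised in the next reply.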

wdmwhh avatar Apr 07 '22 10:04 wdmwhh

Roughly 15x slower: depthwise_conv2d_implicit_gemm.DepthWiseConv2dImplicitGEMM takes 0.0194 s while nn.Conv2d takes 0.00125 s.

wdmwhh avatar Apr 07 '22 10:04 wdmwhh

Hi, I checked the code and found no torch.cuda.synchronize(), so the time recorded may not be the actual running time on the GPU. I would suggest you follow the speed-test script of Swin (https://github.com/microsoft/Swin-Transformer/blob/main/main.py#L287).
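A minimal sketch of such a synchronized timing loop, in the spirit of the Swin throughput test (the module and tensor sizes below are placeholders, not the thread's actual model):

```python
import time
import torch
import torch.nn as nn

def benchmark(module, x, warmup=5, iters=20):
    """Average forward time with synchronization barriers.

    CUDA kernels launch asynchronously, so calling time.time() without
    torch.cuda.synchronize() measures only the launch overhead, not the
    kernel's actual running time.
    """
    with torch.no_grad():
        for _ in range(warmup):              # warm-up passes
            module(x)
        if torch.cuda.is_available():
            torch.cuda.synchronize()         # drain queued GPU work
        t0 = time.time()
        for _ in range(iters):
            module(x)
        if torch.cuda.is_available():
            torch.cuda.synchronize()         # wait before reading the clock
    return (time.time() - t0) / iters

# Placeholder module and input; substitute the block under test.
conv = nn.Conv2d(64, 64, 31, padding=15, groups=64, bias=False)
x = torch.randn(1, 64, 32, 32)
print(f"avg forward time: {benchmark(conv, x):.6f}s")
```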

DingXiaoH avatar Apr 08 '22 04:04 DingXiaoH

The test code is a small reproduction of the phenomenon (depthwise_conv2d_implicit_gemm being slower), which I first observed while training a large model.

wdmwhh avatar Apr 08 '22 06:04 wdmwhh

Adding torch.cuda.synchronize() before each call to time.time() gives timings quite close to those of the original code.

wdmwhh avatar Apr 08 '22 06:04 wdmwhh

This implementation is not suited to small batch sizes. In this case the batch size is 1, so the CUTLASS implementation is slower than the PyTorch one. You can try MegEngine instead.
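To see the batch-size effect, here is a hedged sketch measuring per-sample cost of a 31x31 depthwise convolution at several batch sizes, using nn.Conv2d so it runs anywhere. A throughput-oriented kernel (like the CUTLASS implicit GEMM) amortizes its fixed overhead over the batch, so at batch size 1 a latency-oriented implementation can win:

```python
import time
import torch
import torch.nn as nn

# Sketch: per-sample cost of a 31x31 depthwise conv at different batch
# sizes. Larger batches amortize launch/setup overhead, which is why a
# throughput-oriented kernel suffers most at batch size 1.
conv = nn.Conv2d(64, 64, 31, padding=15, groups=64, bias=False)

def per_sample_time(batch, iters=5):
    x = torch.randn(batch, 64, 32, 32)
    with torch.no_grad():
        conv(x)                              # warm-up
        if torch.cuda.is_available():
            torch.cuda.synchronize()
        t0 = time.time()
        for _ in range(iters):
            conv(x)
        if torch.cuda.is_available():
            torch.cuda.synchronize()
    return (time.time() - t0) / iters / batch

for b in (1, 4, 8):
    print(f"batch {b}: {per_sample_time(b):.6f}s per sample")
```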

xiaocenxiaocen avatar Apr 16 '22 03:04 xiaocenxiaocen

Thanks for your reply. It helps me a lot.

wdmwhh avatar Apr 18 '22 01:04 wdmwhh

I am hitting the same issue.

I trained the ATSS detector with RepLKNet-31B and batch size 1 (2080Ti GPU, 11 GB memory; also, 'use_checkpoint' seems to be incompatible with DDP):

  • when using torch.nn.Conv2d(), training time is about 1.00 s per iteration.
  • when using DepthWiseConv2dImplicitGEMM, training time is about 4.87 s per iteration.

YanShuang17 avatar May 24 '22 08:05 YanShuang17

Hi, I encountered the same problem. When using nn.Conv2d, the model's running time is only ~0.5 s, while with DepthWiseConv2dImplicitGEMM it is ~6 s. The batch size is set to 1 owing to memory limits (a single RTX 3060, 12 GB).

EddieEduardo avatar May 26 '22 03:05 EddieEduardo

Thank you for sharing the results. As explained by @xiaocenxiaocen, our implementation is designed to pursue high throughput: the larger the batch size, the higher the throughput.

DingXiaoH avatar May 26 '22 16:05 DingXiaoH