cudnn.torch

SpatialConvolution running in fully-connected mode is very slow, even with R4

Open vadimkantorov opened this issue 9 years ago • 7 comments

cudnn R4 doesn't choose the optimal algorithm in fully-connected mode (input size equal to kernel size), even with cudnn.benchmark = true, which results in a ~20x slower backward pass compared to MatConvNet.

Torch:

--TORCH OUTPUT (in seconds)
--forward   0.24779605865479    
--backward  4.5414280891418 
--forward   0.051395893096924   
--backward  4.5211651325226 
--forward   0.054457902908325   
--backward  4.5210771560669

-- with cudnn.benchmark = true
--forward   14.457499027252 
--backward  0.98335909843445    
--forward   0.045572996139526   
--backward  0.98773503303528    
--forward   0.045454025268555   
--backward  0.98268413543701


require 'cudnn'
require 'hdf5'

function gpuTicToc(f)
    cutorch.synchronize()
    local tic = torch.tic()
    f()
    cutorch.synchronize()
    return torch.toc(tic)
end

model = cudnn.SpatialConvolution(256, 4096, 6, 6, 1, 1):cuda(); model.weight:fill(1); model.bias:fill(1)
input = torch.CudaTensor(1600, 256, 6, 6):fill(1):cuda()

for i = 1, 3 do
  model:zeroGradParameters()
  print('forward', gpuTicToc(function()
      model:forward(input)
  end))

  one = torch.CudaTensor():resize(model.output:size()):fill(1)
  print('backward', gpuTicToc(function()
      model:backward(input, one)
  end))
end

model:float()

h = hdf5.open('test.h5', 'w')
h:write('/output', model.output)
h:write('/gradInput', model.gradInput)
h:write('/gradWeight', model.gradWeight)
h:write('/gradBias', model.gradBias)
h:close()

MatConvNet:

%MATLAB OUTPUT (in seconds)
%
%forward 0.224209
%backward 0.046167
%forward 0.045812
%backward 0.044633
%forward 0.043401
%backward 0.044506
%
%output diff: 0.000000
%gradInput diff: 0.000000
%gradWeight diff: 0.000000
%gradBias diff: 0.000000


%addpath('matconvnet-1.0-beta18/matlab'); vl_compilenn('EnableGpu', true, 'EnableCudnn', true, 'CudnnRoot', '/home/kantorov/cudnnR4');
run('matconvnet-1.0-beta18/matlab/vl_setupnn.m');

weight = gpuArray(ones(6, 6, 256, 4096, 'single'));
bias = gpuArray(ones(1, 4096, 'single'));

input = gpuArray(ones(6, 6, 256, 1600, 'single'));
one = gpuArray(single(ones(1, 1, 4096, 1600)));

for i = 1:3
    wait(gpuDevice); tic;
    output = vl_nnconv(input, weight, bias);
    wait(gpuDevice); fprintf('forward %f\n', toc);

    wait(gpuDevice); tic;
    [dzdx, dzdf, dzdb] = vl_nnconv(input, weight, bias, one);
    wait(gpuDevice); fprintf('backward %f\n', toc);
end

torch_output = h5read('test.h5', '/output');
torch_gradInput = h5read('test.h5', '/gradInput');
torch_gradWeight = h5read('test.h5', '/gradWeight');
torch_gradBias = h5read('test.h5', '/gradBias');

fprintf('output diff: %f\n', sum(abs(reshape(torch_output, [], 1) - reshape(output, [], 1))));
fprintf('gradInput diff: %f\n', sum(abs(reshape(torch_gradInput, [], 1) - reshape(dzdx, [], 1))));
fprintf('gradWeight diff: %f\n', sum(abs(reshape(torch_gradWeight, [], 1) - reshape(dzdf, [], 1))));
fprintf('gradBias diff: %f\n', sum(abs(reshape(torch_gradBias, [], 1) - reshape(dzdb, [], 1))));

Replacing cudnn.SpatialConvolution with nn.Linear brings Torch on par with MatConvNet:

--TORCH OUTPUT (in seconds) with nn.Linear
--forward   0.046329975128174   
--backward  0.048556089401245   
--forward   0.045660018920898   
--backward  0.046145915985107   
--forward   0.045567989349365   
--backward  0.043753862380981
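
A minimal sketch of the nn.Linear variant of the benchmark (same shapes as above; illustrative, not necessarily the exact script that produced these numbers):

require 'cunn'

-- same synchronize-and-time helper as in the script above
local function gpuTicToc(f)
    cutorch.synchronize()
    local tic = torch.tic()
    f()
    cutorch.synchronize()
    return torch.toc(tic)
end

local model = nn.Linear(256*6*6, 4096):cuda(); model.weight:fill(1); model.bias:fill(1)
local input = torch.CudaTensor(1600, 256*6*6):fill(1)

for i = 1, 3 do
  model:zeroGradParameters()
  print('forward', gpuTicToc(function() model:forward(input) end))

  local one = torch.CudaTensor(model.output:size()):fill(1)
  print('backward', gpuTicToc(function() model:backward(input, one) end))
end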

vadimkantorov, Feb 19 '16 21:02

Actually, MatConvNet's convolution layer automatically switches to a fully-connected layer if the input size equals the kernel size. You can manually do the same thing in Torch, for example (input size (NCHW) = 256x512x7x7, output (N x featureSize) = 256x4096):

model:add(nn.View(7*7*512))
model:add(nn.Linear(7*7*512,4096))

Jerrynet, Feb 22 '16 07:02

This is actually surprising, because the cudnn convolution has implicit and explicit GEMM algorithms, as well as a bunch of others. Maybe their GEMM is lagging behind the cuBLAS GEMM. Would you know anything about this, @ngimel?

soumith, Feb 28 '16 01:02

For backward, the selection of algorithms is smaller (in particular, there is no explicit gemm), and they are not particularly optimized for the case where input size = kernel size. cudnn does not have a runtime dependency on cublas, and includes only a limited subset of cublas gemm kernels, so even if explicit gemm algorithms were added to backward path, there conceivably could be many situations where cudnn would be slower than cublas. I think it is best (as suggested by @vadimkantorov and @Jerrynet) to convert SpatialConvolution to Linear when input size = kernel size.
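
For reference, a minimal sketch of such a conversion for the shapes in the original script (illustrative only; variable names are made up, and the flatten relies on the (nOutput, nInput, kH, kW) weight layout):

require 'cudnn'

-- a trained 256 -> 4096 convolution with 6x6 kernels, as in the script above
local conv = cudnn.SpatialConvolution(256, 4096, 6, 6, 1, 1):cuda()

-- equivalent fully-connected layer: flatten the conv weight to 2D and copy it over
local linear = nn.Linear(256*6*6, 4096):cuda()
linear.weight:copy(conv.weight:view(4096, -1))
linear.bias:copy(conv.bias)

-- for inputs whose spatial size equals the kernel size, the two should agree
local input = torch.randn(8, 256, 6, 6):cuda()
local outConv = conv:forward(input):squeeze()            -- (8, 4096, 1, 1) -> (8, 4096)
local outLin  = linear:forward(input:view(8, 256*6*6))   -- (8, 4096)
print((outConv - outLin):abs():max())                    -- should be ~0 up to fp error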

ngimel, Feb 28 '16 03:02

Thanks Natalia! It is often convenient to keep SpatialConvolution for 1x1; I think we should add nn.Linear.updateOutput(self, input)-like calls with views around them for this special case, something along the lines of the sketch below.
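
A rough sketch of what such a fully-connected fallback might look like (illustrative only, not the actual cudnn.torch code; the function name is made up):

-- assumes a contiguous (N, C, kH, kW) input with H == kH and W == kW
local function fullyConnectedUpdateOutput(self, input)
   local N = input:size(1)
   local input2d  = input:view(N, -1)                           -- (N, C*kH*kW)
   local weight2d = self.weight:view(self.nOutputPlane, -1)     -- (nOutput, C*kH*kW)
   self.output:resize(N, self.nOutputPlane)
   self.output:addmm(0, self.output, 1, input2d, weight2d:t())  -- output = input * W^T
   self.output:add(self.bias:view(1, -1):expandAs(self.output)) -- add bias per row
   self.output = self.output:view(N, self.nOutputPlane, 1, 1)   -- back to conv layout
   return self.output
end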

szagoruyko, Feb 28 '16 13:02

Sergey, please note that 1x1 SpatialConvolution in NCHW does not map directly onto Linear (it would for NHWC layout for images, and similarly for filters), and for Maxwell cudnn performance for this case (NCHW) should be pretty similar to cublas anyway. I don't remember Kepler benchmarks off the top of my head. The original issue was about convolution where image H*W = kH*kW, where cudnn performance can be pretty bad. It generally does not do too well with odd (as in: not small, not square) filter sizes, especially on backward.

ngimel, Feb 28 '16 18:02

@ngimel afaik 1x1 SpatialConvolution in NCHW DOES map to Linear. We have used this trick many times. I think it is because gemm allows transpose as a mode. Here's a simple test case:

require 'nn'

a = nn.Linear(128, 32)

b = nn.SpatialConvolution(128, 32, 1, 1)
b.weight:copy(a.weight);
b.bias:copy(a.bias);

input = torch.randn(16, 128, 1, 1)

outlinear = a:forward(input:view(16,128))
outconv = b:forward(input)

print((outlinear - outconv):abs():max())

And the output is 8.8817841970013e-16

soumith, Feb 28 '16 18:02

Ohh, I assume you are talking about larger inputs. Yes, indeed it does not map then. It only maps correctly, as you said, when H*W = kH*kW. Sorry for the confusion.
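
For the larger-input case, a minimal sketch of how a 1x1 NCHW convolution can still be expressed with nn.Linear by first moving channels innermost (illustrative shapes; the extra permute/copy is exactly the overhead a direct mapping would avoid):

require 'nn'

local N, C, H, W, F = 4, 128, 7, 7, 32

local lin  = nn.Linear(C, F)
local conv = nn.SpatialConvolution(C, F, 1, 1)
conv.weight:copy(lin.weight)   -- conv weight is (F, C, 1, 1): same numbers, different view
conv.bias:copy(lin.bias)

local input = torch.randn(N, C, H, W)

local outConv = conv:forward(input)                               -- (N, F, H, W)

-- NCHW -> (N*H*W, C): make channels contiguous per pixel, then fold pixels into the batch
local inputFlat = input:permute(1, 3, 4, 2):contiguous():view(N*H*W, C)
local outLin = lin:forward(inputFlat)                             -- (N*H*W, F)
outLin = outLin:view(N, H, W, F):permute(1, 4, 2, 3)              -- back to NCHW

print((outConv - outLin):abs():max())                             -- ~0 up to fp error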

soumith, Feb 28 '16 18:02