soft-sharing
Handling depth-wise convolution
Hello, I understand that this is quite an old repo, but I wanted to try my luck here.
How can SConv2d handle depth-wise convolution? Whenever I have tried to pass the groups parameter to the final F.conv2d call, it has thrown a shape mismatch error.
For clarity, I want to replace this nn.Conv2d operation:
Conv2d(486, 486, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=486, bias=False)
And this is my corresponding SConv2d object:
SConv2d(torch.Size([1, 486, 486, 3, 3]), stride=1, padding=1) with torch.Size([1, 1, 1, 1, 1]) coefficients.
Now, whenever I pass a groups value to SConv2d, it fails with a shape mismatch:
RuntimeError: Given groups=486, weight of size [486, 486, 3, 3], expected input[2, 486, 112, 112] to have 236196 channels, but got 486 channels instead
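To make the shape issue concrete, here is a minimal reproduction with plain F.conv2d (no SConv2d involved, so the repo's actual template handling may differ). With groups=486, conv2d infers in_channels as weight.size(1) * groups, so the full [486, 486, 3, 3] weight implies 236196 input channels; a depth-wise weight would instead need shape [486, 1, 3, 3]:

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 486, 112, 112)

# Weight shaped the way SConv2d seems to build it: (out_channels, in_channels, kH, kW).
full_weight = torch.randn(486, 486, 3, 3)

# Weight shape F.conv2d actually expects for a depth-wise conv:
# (out_channels, in_channels // groups, kH, kW) = (486, 1, 3, 3).
dw_weight = torch.randn(486, 1, 3, 3)

# Reproduces my error: with groups=486, conv2d infers
# in_channels = weight.size(1) * groups = 486 * 486 = 236196.
# F.conv2d(x, full_weight, stride=1, padding=1, groups=486)  # RuntimeError

# Works: output has the expected shape (2, 486, 112, 112).
out = F.conv2d(x, dw_weight, stride=1, padding=1, groups=486)
print(out.shape)
```

So I suspect the template/weight bank for this layer would need to be built with shape [num_templates, 486, 1, 3, 3] rather than [1, 486, 486, 3, 3], but I am not sure how that interacts with the coefficient sharing in SConv2d.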
I would be grateful for any suggestions.