
There is a mismatch in the sizes of the feature maps

SwordHolderSH opened this issue 1 year ago · 3 comments

```
  File "D:\anaconda3\envs\mypytorch\lib\site-packages\torch\nn\modules\module.py", line 1488, in _call_impl
    return forward_call(*args, **kwargs)
  File "G:\Ubuntu\pyproject\3Dface_commpare\SADRNet-main\src\model\modules.py", line 540, in forward
    out += identity
RuntimeError: The size of tensor a (129) must match the size of tensor b (128) at non-singleton dimension 3
```

In SADRNet-main\src\model\SADRNv2.py, in class SADRNv2, the input to layer0 has size [1, 3, 256, 256], but its output has size [1, 16, 255, 255]: [screenshot]

After that, the feature-map sizes no longer match: [screenshot]
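The 256 → 255 shrink follows directly from PyTorch's convolution output-size formula. As a quick check (assuming, hypothetically, that layer0 applies an even 4×4 kernel with stride 1 and padding 1), the arithmetic reproduces the reported sizes:

```python
def conv_out_size(h, k, s, p, d=1):
    # PyTorch Conv2d output size: floor((H + 2p - d*(k - 1) - 1) / s) + 1
    return (h + 2 * p - d * (k - 1) - 1) // s + 1

# An even 4x4 kernel with stride 1 and padding 1 shrinks 256 -> 255,
# matching the [1, 16, 255, 255] output reported above.
print(conv_out_size(256, k=4, s=1, p=1))  # 255

# A 3x3 kernel with the same stride and padding preserves the size.
print(conv_out_size(256, k=3, s=1, p=1))  # 256
```

The off-by-one then propagates through the network until a residual `out += identity` finally fails with mismatched shapes.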

SwordHolderSH · Apr 20 '23

Hello, did you solve this problem? I have the same error.

chenhao-user · May 30 '23

> Hello, did you solve this problem? I have the same error.

This is because different versions of PyTorch compute convolution output sizes and padding differently. I worked around it by selecting the padding scheme with an `if ... else`:

```python
import torch.nn as nn


class ConvTranspose2d_BN_AC2(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=4, stride=1,
                 activation=nn.ReLU(inplace=True)):
        super(ConvTranspose2d_BN_AC2, self).__init__()
        if stride % 2 == 0:
            # Even stride: symmetric padding (k - 1) // 2 upsamples to exactly H * stride.
            self.deconv = nn.ConvTranspose2d(in_channels=in_channels, out_channels=out_channels,
                                             kernel_size=kernel_size, stride=stride,
                                             padding=(kernel_size - 1) // 2, bias=False)
        else:
            # Odd stride: pad asymmetrically first so the spatial size is preserved.
            self.deconv = nn.Sequential(
                nn.ConstantPad2d((2, 1, 2, 1), 0),
                nn.ConvTranspose2d(in_channels=in_channels, out_channels=out_channels,
                                   kernel_size=kernel_size, stride=stride, padding=3,
                                   bias=False))

        self.BN_AC = nn.Sequential(
            nn.BatchNorm2d(out_channels, eps=0.001, momentum=0.5),
            activation)

    def forward(self, x):
        out = self.deconv(x)
        out2 = self.BN_AC(out)
        return out2


def conv4x4(in_planes, out_planes, stride=1, padding=3, dilation=1, padding_mode='circular'):
    # Override kernel size and padding so the output size is predictable:
    # a 4x4 kernel with stride 2 halves the map, a 3x3 kernel with stride 1 keeps it.
    if stride == 2:
        kernel_size = 4
        padding = 1
    else:
        kernel_size = 3
        padding = 1

    return nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride,
                     padding=padding, bias=False, dilation=dilation, padding_mode=padding_mode)
```
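The two branches in `ConvTranspose2d_BN_AC2` can be sanity-checked with the standard transposed-convolution size formula; the sketch below (pure Python, assuming `dilation=1` and `output_padding=0`, and using 128 as an illustrative input size) shows why each branch yields the intended size:

```python
def deconv_out_size(h, k, s, p):
    # PyTorch ConvTranspose2d output size (dilation=1, output_padding=0):
    # (H - 1) * s - 2p + k
    return (h - 1) * s - 2 * p + k

# Even-stride branch: k=4, s=2, p=(4 - 1) // 2 = 1 doubles the map: 128 -> 256.
print(deconv_out_size(128, k=4, s=2, p=1))      # 256

# Odd-stride branch: ConstantPad2d((2, 1, 2, 1)) adds 3 to each spatial dim,
# then k=4, s=1, p=3 brings the size back unchanged: 128 -> 128.
print(deconv_out_size(128 + 3, k=4, s=1, p=3))  # 128
```

With matching sizes restored, the residual `out += identity` in modules.py no longer raises the RuntimeError above.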

SwordHolderSH · May 31 '23

Hello, I am new to this field and I am confused about how the version impacts the kernel size and padding. When I use the above approach, I still get the same error, but if I keep kernel_size as 3 and padding as 1, I get an output with an incorrect mask. My PyTorch version is 2.1.0+cu118. Could you please guide me?

AdityaNair17 · Oct 23 '23