
Bilinear Upsampler Implementation

Open sr-frost opened this issue 4 years ago • 1 comment

Hi, it's really refreshing to see signal processing principles used in deep networks. I have a question about the upsampling mechanism.

After going through the original code and the related issue #28, the following is my attempt at implementing a bilinear upsampler:

import torch
import torch.nn as nn
import torch.nn.functional as F


class BilinearUpsample(nn.Module):
    def __init__(self, channels=None, interpolation_factor=2):
        super(BilinearUpsample, self).__init__()
        self.channels = channels
        self.stride = interpolation_factor

        # Separable 3-tap binomial kernel [1, 2, 1], hard-coded for factor 2.
        lpf = torch.tensor([[1., 2., 1.],
                            [2., 4., 2.],
                            [1., 2., 1.]])
        lpf = lpf / torch.sum(lpf) * 4              # normalize so the kernel sums to stride**2
        lpf = lpf.unsqueeze(dim=0).unsqueeze(dim=0)
        lpf = lpf.repeat([self.channels, 1, 1, 1])  # one filter per channel (depthwise)
        self.register_buffer('lpf', lpf)

    def forward(self, x):
        # Upsample with a strided transposed convolution, applied per channel.
        return F.conv_transpose2d(input=x, weight=self.lpf, stride=self.stride,
                                  padding=1, output_padding=1, groups=self.channels)

Is this implementation correct? The result seems very different from PyTorch's bilinear interpolation.

Using

BilinearUpsample(channels=num_channels)(a)

vs

F.interpolate(a, scale_factor=2, mode='bilinear', align_corners=True) 

gives very different outputs.
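
For reference, here is a minimal sketch of the comparison I am running. The input shape, the seed, and num_channels = 3 are arbitrary values I picked for this repro, not anything from the repository.

import torch
import torch.nn.functional as F

torch.manual_seed(0)
num_channels = 3
a = torch.randn(1, num_channels, 8, 8)   # arbitrary test input

out_module = BilinearUpsample(channels=num_channels)(a)
out_interp = F.interpolate(a, scale_factor=2, mode='bilinear', align_corners=True)

# Both outputs have shape (1, 3, 16, 16), but the values disagree.
print(out_module.shape, out_interp.shape)
print((out_module - out_interp).abs().max())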

sr-frost avatar Mar 28 '20 21:03 sr-frost

Shouldn't you have a 4x4 kernel?
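
For factor-2 upsampling, the usual transposed-convolution bilinear kernel (the FCN-style construction) has 4 taps per dimension rather than 3. A rough sketch of what I mean, with values I worked out myself rather than code from this repository:

import torch
import torch.nn.functional as F

factor = 2
size = 2 * factor - factor % 2                  # 4 taps for factor 2
center = (size - 1) / 2.0
coords = torch.arange(size, dtype=torch.float32)
k1d = 1.0 - (coords - center).abs() / factor    # tensor([0.2500, 0.7500, 0.7500, 0.2500])
k2d = k1d[:, None] * k1d[None, :]               # separable 4x4 bilinear kernel

num_channels = 3
weight = k2d.repeat(num_channels, 1, 1, 1)      # shape (num_channels, 1, 4, 4), depthwise

x = torch.randn(1, num_channels, 8, 8)
y = F.conv_transpose2d(x, weight, stride=factor, padding=1, groups=num_channels)
print(y.shape)   # torch.Size([1, 3, 16, 16]) -- no output_padding needed with a 4x4 kernel

As far as I can tell, away from the image borders this matches F.interpolate(..., mode='bilinear', align_corners=False); the borders still differ because the transposed convolution implicitly zero-pads instead of replicating edge pixels.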

hadaev8 avatar Nov 05 '21 18:11 hadaev8