Bilinear Upsampler Implementation
Hi, it's really refreshing to see signal processing principles used in deep networks. I have a question about the upsampling mechanism.
After going through the original code and the related issue #28, the following is my attempt at implementing a bilinear upsampler:
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilinearUpsample(nn.Module):
    def __init__(self, channels=None, interpolation_factor=2):
        super(BilinearUpsample, self).__init__()
        self.channels = channels
        self.stride = interpolation_factor
        # 3x3 tent filter, outer product of [1, 2, 1]
        lpf = torch.tensor([[1., 2., 1.],
                            [2., 4., 2.],
                            [1., 2., 1.]])
        # normalize to overall gain 4, i.e. unit gain per polyphase component at stride 2
        lpf = lpf / torch.sum(lpf) * 4
        # one depthwise filter per channel: shape (channels, 1, 3, 3)
        lpf = lpf.unsqueeze(dim=0).unsqueeze(dim=0)
        lpf = lpf.repeat([self.channels, 1, 1, 1])
        self.register_buffer('lpf', lpf)

    def forward(self, x):
        # zero-stuff by `stride`, then low-pass filter via a depthwise transposed conv
        return F.conv_transpose2d(input=x, weight=self.lpf, stride=self.stride,
                                  padding=1, output_padding=1, groups=self.channels)
Is this implementation correct? The result seems very different from PyTorch's bilinear interpolation.
Using
BilinearUpsample(channels=num_channels)(a)
vs
F.interpolate(a, scale_factor=2, mode='bilinear', align_corners=True)
returns very different outputs.
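For concreteness, this is roughly the comparison I ran (continuing from the class definition above; the input a is just a hypothetical random tensor, and the shapes are only what I happened to test with):

num_channels = 3
a = torch.randn(1, num_channels, 8, 8)

up_transposed = BilinearUpsample(channels=num_channels)(a)
up_interp = F.interpolate(a, scale_factor=2, mode='bilinear', align_corners=True)

print(up_transposed.shape, up_interp.shape)     # both come out as (1, 3, 16, 16)
print((up_transposed - up_interp).abs().max())  # maximum elementwise difference

The shapes match, but the values do not.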
Shouldn't you have a 4x4 kernel?
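For reference, this is the kind of 4x4 kernel I have in mind, i.e. what I believe is the usual factor-2 bilinear kernel built from [1, 3, 3, 1] / 4 (my own sketch reusing a and num_channels from above, not something taken from this repo):

# 4x4 bilinear kernel for factor-2 upsampling: outer product of [1, 3, 3, 1] / 4
k1d = torch.tensor([1., 3., 3., 1.]) / 4.
lpf4 = torch.outer(k1d, k1d)                      # each polyphase component sums to 1
lpf4 = lpf4[None, None].repeat(num_channels, 1, 1, 1)

up4 = F.conv_transpose2d(a, lpf4, stride=2, padding=1, groups=num_channels)
print(up4.shape)                                  # (1, 3, 16, 16)

Is something like this what the antialiased upsampling is supposed to use, or is the 3x3 version above the intended one?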