I think it's a general problem when the input to the functional layer is dynamic. I had a situation where the kernel size of a functional avg_pool3d call depended on the shape of the previous layer's outputs. One has to either make the kernel constant or switch to PyTorch's non-functional (module) API.
Does anybody know how I can make the kernel size static here?
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeM(nn.Module):
    def __init__(self, p=3, eps=1e-6):
        super(GeM, self).__init__()
        self.p = nn.Parameter(torch.ones(1) * p)
        self.eps = eps
    def forward(self, x):
        return self.gem(x, p=self.p, eps=self.eps)
    def gem(self, x, p=3, eps=1e-6):
        # The kernel size is read from x at runtime, which is what makes it dynamic
        return F.avg_pool2d(x.clamp(min=eps).pow(p), (x.size(-2), x.size(-1))).pow(1. / p)
    def __repr__(self):
        return self.__class__.__name__ + '(p={:.4f}, eps={})'.format(self.p.data.tolist()[0], self.eps)
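One possible way to make the kernel constant is to fix it at construction time instead of reading x.size() in forward. Below is a minimal sketch of that idea; the GeMFixed name and the kernel_size argument are my own additions, and it assumes the spatial size of the incoming feature map is known up front (e.g. 7x7 for a typical backbone at 224x224 input):

import torch
import torch.nn as nn
import torch.nn.functional as F

class GeMFixed(nn.Module):
    def __init__(self, kernel_size, p=3, eps=1e-6):
        super(GeMFixed, self).__init__()
        # The pooling window is a plain Python constant baked in at construction time,
        # so nothing in forward() depends on the runtime shape of x.
        self.kernel_size = kernel_size
        self.p = nn.Parameter(torch.ones(1) * p)
        self.eps = eps
    def forward(self, x):
        return F.avg_pool2d(x.clamp(min=self.eps).pow(self.p), self.kernel_size).pow(1. / self.p)

# Usage: a backbone that emits 7x7 feature maps
pool = GeMFixed(kernel_size=(7, 7))
out = pool(torch.randn(2, 512, 7, 7))  # -> shape (2, 512, 1, 1)

If what you ultimately want is global pooling over the whole feature map, F.adaptive_avg_pool2d(x, 1) (or the nn.AdaptiveAvgPool2d(1) module) expresses the same reduction without calling x.size() in forward, which is often enough to keep the graph static for tracing/export.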