yolo2-pytorch
Does self.global_average_pool do anything?
The darknet model defines a global average pooling layer as follows:
# linear
out_channels = cfg.num_anchors * (cfg.num_classes + 5)
self.conv5 = net_utils.Conv2d(c4, out_channels, 1, 1, relu=False)
self.global_average_pool = nn.AvgPool2d((1, 1))
The forward function uses it as follows:
conv4 = self.conv4(cat_1_3)
conv5 = self.conv5(conv4) # batch_size, out_channels, h, w
global_average_pool = self.global_average_pool(conv5)
However, the kernel size of nn.AvgPool2d is 1x1, so I'm confused about what, if anything, this is doing. It seems like a no-op. When stepping through the code I've confirmed that np.all(conv5 == gapooled) is True.
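For anyone who wants to double-check, here is a minimal standalone sketch of the same experiment; the tensor shape is a made-up YOLO-style example (425 = 5 anchors * (80 classes + 5)), not taken from the repo:

import numpy as np
import torch
import torch.nn as nn

# Hypothetical feature map: batch_size, num_anchors * (num_classes + 5), h, w
conv5 = torch.randn(1, 425, 13, 13)
# Kernel 1x1 with the default stride (= kernel size) averages each element
# over a single-element window, so the output equals the input.
gapooled = nn.AvgPool2d((1, 1))(conv5)
assert np.all(conv5.numpy() == gapooled.numpy())  # passes: identity op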
Is this a bug?
I have the same confusion and hope someone can answer it.
Thanks in advance.
Yes, you are right: nn.AvgPool2d((1,1)) is a no-op.
It was introduced in https://github.com/longcw/yolo2-pytorch/commit/7fa25e1653eaf2dc84c0bd50804a1530f88501ac. I merged it without a careful code review. Sorry for that; you can remove it from your code.
@longcw Thank you for answering and thanks for the code!
The main reason I found this confusing is that the YOLO9000 paper mentions global average pooling. However, on careful inspection it turns out that not even the original darknet code uses it.
I was just saying that nn.AvgPool2d((1,1)) doesn't seem to do anything. I think that if you want to implement the final global pooling described in the paper, you should use nn.AdaptiveAvgPool2d((1, 1)) instead.
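For reference, a quick sketch contrasting the two layers (the input shape is just an example, not from the repo):

import torch
import torch.nn as nn

x = torch.randn(1, 425, 13, 13)

# AvgPool2d with a 1x1 kernel: each output element is the average of a single
# input element, so the tensor passes through unchanged.
print(nn.AvgPool2d((1, 1))(x).shape)          # torch.Size([1, 425, 13, 13])

# AdaptiveAvgPool2d((1, 1)): averages over the entire h x w plane, which is
# true global average pooling for any input size.
print(nn.AdaptiveAvgPool2d((1, 1))(x).shape)  # torch.Size([1, 425, 1, 1])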