
Positioning of ReLU in FPA

Open · JohnMBrandt opened this issue 4 years ago · 0 comments

In networks.py, lines 123 - 124:

x3_upsample = self.relu(self.bn_upsample_3(self.conv_upsample_3(x3_2)))
x2_merge = self.relu(x2_2 + x3_upsample)

I see that x2_2 has a linear (no) activation, so why does x3_upsample get a ReLU if you then apply ReLU again after the addition?
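
For reference, here is a minimal sketch (tensor shapes and module instances are illustrative placeholders, not the repo's actual classes) contrasting the current placement with the single-ReLU alternative the question implies. Note that the two are not mathematically equivalent, since the branch ReLU clamps negative values in x3_upsample before the addition:

import torch
import torch.nn as nn

# Illustrative stand-ins for the layers used in the quoted snippet.
conv_upsample_3 = nn.Conv2d(64, 64, kernel_size=1)
bn_upsample_3 = nn.BatchNorm2d(64)
relu = nn.ReLU()

x2_2 = torch.randn(1, 64, 32, 32)  # branch with a linear (no) activation
x3_2 = torch.randn(1, 64, 32, 32)

# Current placement: ReLU on the upsample branch, then ReLU again after the merge.
x3_upsample = relu(bn_upsample_3(conv_upsample_3(x3_2)))
x2_merge_current = relu(x2_2 + x3_upsample)

# Alternative implied by the question: leave the branch pre-activation and
# apply a single ReLU after the merge. This changes what the addition sees,
# because negative branch values are no longer clamped before summation.
x3_preact = bn_upsample_3(conv_upsample_3(x3_2))
x2_merge_single = relu(x2_2 + x3_preact)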

— JohnMBrandt, Sep 16 '20