
Gating Signal before Convolution

Open ghost opened this issue 1 year ago • 0 comments

Hey,

I was working through the paper and the code together. In Figure 2 of the paper, at each level the output of the convolutional block is both passed on to the next up-convolution and used as the gating signal. In the code this is consistent for the first up-convolution (it uses `center`, the output of the convolutional block). For the subsequent levels of the expanding path, however, no 3x3 convolutions are applied before producing the input to the next level and the gating signal; instead, the previously concatenated `attn*` and `up*` tensors are used directly.

The outputs at each level, on the other hand, do go through full convolutional blocks (not just a single 3x3 convolution):

```python
conv6 = UnetConv2D(up1, 256, is_batchnorm=True, name='conv6')
conv7 = UnetConv2D(up2, 128, is_batchnorm=True, name='conv7')
conv8 = UnetConv2D(up3, 64, is_batchnorm=True, name='conv8')
conv9 = UnetConv2D(up4, 32, is_batchnorm=True, name='conv9')
```
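To make the wiring I mean concrete, here is a small symbolic sketch of the two variants. The helper functions are placeholders I made up (they just build strings describing the graph, not real Keras layers), and the variable names follow the repo's naming; this is only my reading of the code, so please correct me if I got it wrong:

```python
# Symbolic trace of the decoder wiring; each "layer" just records its inputs.
# These helpers are hypothetical stand-ins, not the repo's actual Keras ops.
def gating(x):          return f"gating({x})"
def attention(skip, g): return f"attn({skip}, {g})"
def concat(a, b):       return f"concat({a}, {b})"
def conv_block(x):      return f"convblock({x})"  # e.g. UnetConv2D

# Level 1: gating signal taken from the conv-block output ("center"),
# which matches Figure 2 of the paper.
center = "center"
g1 = gating(center)
a1 = attention("conv4", g1)
up1 = concat(f"up({center})", a1)

# Level 2 as I read the code: the gating signal is computed from the
# concatenated "up1" directly, with no 3x3 convolutions in between ...
g2_code = gating(up1)

# ... whereas the figure suggests it should come from conv6 = convblock(up1):
conv6 = conv_block(up1)
g2_paper = gating(conv6)

print(g2_code)
print(g2_paper)
```

Printing both traces shows the difference: only `g2_paper` contains a `convblock` between the concatenation and the gating signal.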

So the implementation does not seem consistent with the figure to me. I'm totally new to attention networks, so I'd be very glad for any help understanding this architecture.

Thanks :)

ghost avatar Aug 29 '22 14:08 ghost