UNet decoder's double conv layers might have a logical error
In the UNet implementation, the decoder layers are bugged due to how `x2conv` is defined. Since the `inner_channels` parameter is not given, it is computed as `inner_channels = out_channels // 2 if inner_channels is None else inner_channels`.
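For context, here is a minimal sketch of what `x2conv` presumably looks like, reconstructed from the behavior described above (the BatchNorm/ReLU details are my assumption; the exact code in the repo may differ):

```python
import torch.nn as nn

def x2conv(in_channels, out_channels, inner_channels=None):
    # Buggy default: halves out_channels instead of matching it
    inner_channels = out_channels // 2 if inner_channels is None else inner_channels
    return nn.Sequential(
        nn.Conv2d(in_channels, inner_channels, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(inner_channels),
        nn.ReLU(inplace=True),
        nn.Conv2d(inner_channels, out_channels, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(out_channels),
        nn.ReLU(inplace=True),
    )
```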
For instance, when initializing a decoder with `decoder(in_channels=128, out_channels=64)`, the double convolutional layers will be `Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)` and `Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)`. I do not think going from 128 to 32 and then to 64 channels is intended in UNet.
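A quick way to reproduce the channel flow, assuming the `x2conv` sketch above:

```python
# Inspect the conv layers produced by the buggy default
block = x2conv(in_channels=128, out_channels=64)
for m in block:
    if isinstance(m, nn.Conv2d):
        print(m)
# Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
# Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
```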
An easy fix would be changing the definition of `inner_channels` in `x2conv` to `inner_channels = out_channels if inner_channels is None else inner_channels` or `inner_channels = in_channels if inner_channels is None else inner_channels`.
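A minimal sketch of the first fix, which matches the original UNet paper where both convolutions in a decoder stage produce `out_channels` (e.g. 1024 -> 512 -> 512 after concatenating the skip connection):

```python
def x2conv(in_channels, out_channels, inner_channels=None):
    # Fixed default: keep the full out_channels width in the first conv,
    # as in the original UNet decoder blocks
    inner_channels = out_channels if inner_channels is None else inner_channels
    return nn.Sequential(
        nn.Conv2d(in_channels, inner_channels, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(inner_channels),
        nn.ReLU(inplace=True),
        nn.Conv2d(inner_channels, out_channels, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(out_channels),
        nn.ReLU(inplace=True),
    )

# x2conv(128, 64) now gives 128 -> 64 -> 64 channels
```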
If this is indeed a bug, I am surprised it was never caught. Let me know and I can make a pull request.
Hello, I ran into the same problem. I also found bugs in the FCN and SegNet implementations, which caused me a lot of trouble. The author did not reproduce the models exactly as described in the original papers...