TF-SegNet
Why is 'activation=False' in decoder?
Hi, I'm studying SegNet and your code is excellent for understanding the whole structure of the network. While reading it, I was a little confused about line 35 in inference.py. Could you please tell me why you set activation=False in
conv_decode4 = conv_layer_with_bn(initializer, unpool_4, [7, 7, 64, 64], is_training, False, name="conv_decode4")
and in the other conv_layer_with_bn calls
in the decoder part? Thank you very much!
Honestly, I'm not quite sure and I remember wondering about this as well when I implemented the network. Let me look into it a bit and maybe I can give you an answer :)
From the SegNet paper, I found this in the first paragraph of chapter 3: "No ReLU non-linearity is used in the decoder unlike the deconvolution network [41, 42]. This makes it easier to optimize the filters in each pair." I'm not sure exactly why, but at least it means the choice has been tested and shown to work better.
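For illustration, here is a minimal sketch of how such an activation flag is typically wired up: convolution, then batch norm, then an optional ReLU. This is written with Keras layers rather than the repo's original TF1 helpers, and conv_bn_block and its parameters are hypothetical names, not the actual conv_layer_with_bn implementation.

import tensorflow as tf

def conv_bn_block(x, filters, kernel_size, is_training, activation=True):
    """Convolution followed by batch norm, with an optional ReLU.

    Hypothetical sketch: passing activation=False skips the ReLU, which is
    what the decoder does per the SegNet paper's choice of linear decoder filters.
    """
    x = tf.keras.layers.Conv2D(filters, kernel_size, padding="same")(x)
    x = tf.keras.layers.BatchNormalization()(x, training=is_training)
    if activation:
        x = tf.nn.relu(x)
    return x

# Encoder-style block keeps the ReLU; decoder-style block drops it.
inputs = tf.keras.Input(shape=(64, 64, 64))
encoded = conv_bn_block(inputs, 64, 7, is_training=True, activation=True)
decoded = conv_bn_block(encoded, 64, 7, is_training=True, activation=False)

So the flag only controls whether a ReLU follows the batch norm; the convolution and normalization are the same in both encoder and decoder.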
Thank you!