
Implementation for fully connected layer.

Open eveningdong opened this issue 7 years ago • 4 comments

Hi, I was directed here by

https://datascience.stackexchange.com/questions/12830/how-are-1x1-convolutions-the-same-as-a-fully-connected-layer

if name == 'fc6':
    filt = self.get_fc_weight_reshape(name, [7, 7, 512, 4096])
elif name == 'score_fr':
    name = 'fc8'  # Name of score_fr layer in VGG Model
    filt = self.get_fc_weight_reshape(name, [1, 1, 4096, 1000], num_classes=num_classes)
else:
    filt = self.get_fc_weight_reshape(name, [1, 1, 4096, 4096])
    conv = tf.nn.conv2d(bottom, filt, [1, 1, 1, 1], padding='SAME')

My question is about the 'fc6' layer. Assume that at that layer the input (`bottom` here) has shape [batch_size, 7, 7, 512] and the weight matrix (`filt` here) has shape [7, 7, 512, 4096]. Then after `tf.nn.conv2d(bottom, filt, [1, 1, 1, 1], padding='SAME')`, the output (`conv` here) should have shape [batch_size, 7, 7, 4096]. Even granting that a 1 x 1 convolution is the same as a fully connected layer, this is not a fully connected layer: a true FC layer would produce a [batch_size, 1, 1, 4096] output.
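To make the dimension test concrete, here is a small stand-alone sketch (no TensorFlow needed) of the output-size formulas that `tf.nn.conv2d` uses for its two padding modes. `conv2d_output_hw` is a hypothetical helper written for illustration, not part of this repo:

```python
import math

def conv2d_output_hw(in_hw, filt_hw, stride=1, padding='SAME'):
    """Spatial output size of tf.nn.conv2d for the given padding mode."""
    if padding == 'SAME':
        # 'SAME' zero-pads so the output size depends only on the stride
        return tuple(math.ceil(s / stride) for s in in_hw)
    # 'VALID': no padding, the filter must fit entirely inside the input
    return tuple(math.ceil((s - f + 1) / stride) for s, f in zip(in_hw, filt_hw))

# fc6 as in the snippet: 7x7 input, 7x7 filter, stride 1
print(conv2d_output_hw((7, 7), (7, 7), padding='SAME'))   # (7, 7)
print(conv2d_output_hw((7, 7), (7, 7), padding='VALID'))  # (1, 1)
```

So with 'SAME' padding, fc6 keeps the 7 x 7 spatial grid; only 'VALID' would collapse it to 1 x 1.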

eveningdong avatar May 16 '17 05:05 eveningdong

@NanqingD The last three layers of FCNs are convolution layers.

yeshenlin avatar May 16 '17 07:05 yeshenlin

@yeshenlin Hi, I made a pull request; you should understand what I am saying after you see it. As Yann LeCun says, all FC layers are convolution layers. There should be a bug here if you do a dimension test on it.

eveningdong avatar May 16 '17 13:05 eveningdong

The padding 'SAME' is intentional: it is needed to get a whole-image output after FCN upsampling. Yes, the output of fc6 is not equal to the output of a VGG classification network. This is by design, however; the goal is to perform segmentation, not to show how to perform classification with conv layers.

The output of our network is equal to a VGG network convolved in sliding-window fashion over the input. The center pixel of our [batch_size, 7, 7, 4096] grid is equivalent to the FC layer output in classification VGG.
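The sliding-window equivalence described above can be sketched in NumPy with toy sizes (all names and dimensions here are illustrative, not from the repo): a 1 x 1 convolution is the same as applying one FC weight matrix at every spatial position.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 7, 7, 4))   # [batch, H, W, C_in], toy sizes
w = rng.standard_normal((4, 16))        # FC weights: C_in -> C_out

# 1x1 convolution: contract only the channel axis at each (h, w) position
conv = np.einsum('bhwc,cd->bhwd', x, w)

# "FC layer in sliding-window fashion": apply the same matmul at every position
fc = np.empty((2, 7, 7, 16))
for i in range(7):
    for j in range(7):
        fc[:, i, j, :] = x[:, i, j, :] @ w

assert np.allclose(conv, fc)  # identical results
```

Both paths share one weight matrix; the "convolutionalized" FC layer just evaluates it densely over the grid instead of once.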

MarvinTeichmann avatar May 16 '17 14:05 MarvinTeichmann

Hi, I was directed here by

https://datascience.stackexchange.com/questions/12830/how-are-1x1-convolutions-the-same-as-a-fully-connected-layer

if name == 'fc6':
    filt = self.get_fc_weight_reshape(name, [7, 7, 512, 4096])
elif name == 'score_fr':
    name = 'fc8'  # Name of score_fr layer in VGG Model
    filt = self.get_fc_weight_reshape(name, [1, 1, 4096, 1000], num_classes=num_classes)
else:
    filt = self.get_fc_weight_reshape(name, [1, 1, 4096, 4096])
    conv = tf.nn.conv2d(bottom, filt, [1, 1, 1, 1], padding='SAME')

My question is about the 'fc6' layer. Assume that at that layer the input (`bottom` here) has shape [batch_size, 7, 7, 512] and the weight matrix (`filt` here) has shape [7, 7, 512, 4096]. Then after `tf.nn.conv2d(bottom, filt, [1, 1, 1, 1], padding='SAME')`, the output (`conv` here) should have shape [batch_size, 7, 7, 4096]. Even granting that a 1 x 1 convolution is the same as a fully connected layer, this is not a fully connected layer: a true FC layer would produce a [batch_size, 1, 1, 4096] output.

Can you please explain how the output will have shape [batch_size, 7, 7, 4096] and not [batch_size, 1, 1, 4096]? If the input is `[batch_size, 7, 7, 512]` and the filter is `[7, 7, 512, 4096]`, shouldn't I get just one pixel value for each `[7, 7, 512]` filter, repeated 4096 times? My understanding might be wrong, but can you please clarify?
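The single-pixel expectation in this question is exactly what 'VALID' padding gives; with 'SAME', TensorFlow zero-pads the input so the output keeps the 7 x 7 spatial size. A toy single-channel NumPy sketch (hypothetical sizes, not the repo's code) shows the difference:

```python
import numpy as np

# Toy fc6: 7x7 single-channel input, 7x7 all-ones filter, stride 1
x = np.arange(49, dtype=float).reshape(7, 7)
k = np.ones((7, 7))

# 'VALID': the 7x7 filter fits only once -> a single output value
valid = np.sum(x * k)   # 1176.0, i.e. a [batch, 1, 1, out] tensor

# 'SAME': zero-pad by 3 on each side so every position gets a full 7x7 window
xp = np.pad(x, 3)
same = np.array([[np.sum(xp[i:i + 7, j:j + 7] * k) for j in range(7)]
                 for i in range(7)])

print(valid)        # 1176.0: the single VALID output
print(same.shape)   # (7, 7): SAME keeps the spatial grid
```

Note that the center of the SAME output (`same[3, 3]`) equals the VALID output, which is the "center pixel" equivalence mentioned earlier in the thread.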

Tasmia09 avatar Dec 13 '19 15:12 Tasmia09