
When I put two identical tensors into this function, the output_dim of the result is "?"

Open tigereatsheep opened this issue 7 years ago • 5 comments

Why?

tigereatsheep avatar Nov 20 '17 07:11 tigereatsheep

I ran into the same situation.

GukehAn avatar Jun 05 '18 06:06 GukehAn

You can explicitly set the (static) output shapes using `Tensor.set_shape`.

ronghanghu avatar Jun 05 '18 13:06 ronghanghu
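For context, the static shape shows up as `?` because the layer reshapes with a dynamic `output_shape` computed from `tf.shape(bottom1)`, which TensorFlow cannot evaluate at graph-construction time. Below is a minimal sketch of the `set_shape` workaround; the input shapes are illustrative stand-ins for VGG conv5 feature maps, not taken from the thread:

```python
import tensorflow as tf

from compact_bilinear_pooling import compact_bilinear_pooling_layer

# Illustrative inputs shaped like VGG conv5 feature maps.
bottom1 = tf.placeholder(tf.float32, [32, 14, 14, 512])
bottom2 = tf.placeholder(tf.float32, [32, 14, 14, 512])
output_dim = 16000

cbp = compact_bilinear_pooling_layer(bottom1, bottom2, output_dim, sum_pool=False)
print(cbp.get_shape())  # may print (?, ?, ?, ?) -- lost to the dynamic reshape

# Declare the static shape explicitly; this only annotates the graph,
# it does not change the tensor's values.
cbp.set_shape([32, 14, 14, output_dim])
print(cbp.get_shape())  # (32, 14, 14, 16000)
```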

Here is my solution, replacing line 147 of `compact_bilinear_pooling.py`:

```python
# Original lines, commented out:
# output_shape = tf.add(tf.multiply(tf.shape(bottom1), [1, 1, 1, 0]),
#                       [0, 0, 0, output_dim])
# cbp = tf.reshape(cbp_flat, output_shape)

# Reshape with the statically known height and width instead, so the
# output keeps a defined static shape.
outputHeight, outputWidth = bottom1.shape.as_list()[1:3]
cbp = tf.reshape(cbp_flat, [-1, outputHeight, outputWidth, output_dim])
```

I don't know if this is correct.

GukehAn avatar Jun 07 '18 09:06 GukehAn
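This patch looks reasonable as long as `bottom1` has statically known height and width; if `shape.as_list()` returns `None` for those dimensions, the `tf.reshape` call will fail, and the `-1` keeps only the batch dimension dynamic. A quick sanity check of the patched layer, with hypothetical shapes:

```python
import tensorflow as tf

from compact_bilinear_pooling import compact_bilinear_pooling_layer

# Batch size left dynamic; height/width must be static for the patch to work.
bottom1 = tf.placeholder(tf.float32, [None, 14, 14, 512])
bottom2 = tf.placeholder(tf.float32, [None, 14, 14, 512])

cbp = compact_bilinear_pooling_layer(bottom1, bottom2, 8000, sum_pool=False)
print(cbp.get_shape())  # expect (?, 14, 14, 8000) with the patch applied
```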

Dear friend: thank you for your work. I tried to call this function to reproduce the paper, but the loss (cost function) stays very large during training and shows no tendency to decrease; it may be diverging. Can you help me see what is wrong?

`self.cbp = compact_bilinear_pooling_layer(self.conv5_3, self.conv5_2, 16000, sum_pool=True)`

I use VGG-16's conv5_2 and conv5_3 as the inputs bottom1 and bottom2, then pass the resulting self.cbp directly to a fully-connected softmax classifier. But the loss on both the training set and the validation set stays very large and never converges. Can you tell me whether some steps are missing from this pipeline? I use stochastic gradient descent to minimize the cross entropy between the final prediction and the label, with a batch size of 32.

JUSTDODoDo avatar Dec 06 '18 08:12 JUSTDODoDo

Can you help me?

JUSTDODoDo avatar Dec 06 '18 08:12 JUSTDODoDo
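Regarding the divergence described above: one step that may be missing is the post-pooling normalization. The compact bilinear pooling paper applies an element-wise signed square root followed by L2 normalization before the classifier, and skipping it can leave the 16000-d features poorly scaled. Whether this is the actual cause here is an assumption; a hedged sketch, with `num_classes` and the placeholder inputs purely illustrative:

```python
import tensorflow as tf

from compact_bilinear_pooling import compact_bilinear_pooling_layer

num_classes = 200  # illustrative

# Stand-ins for the VGG-16 conv5_2 / conv5_3 activations described above.
conv5_2 = tf.placeholder(tf.float32, [None, 14, 14, 512])
conv5_3 = tf.placeholder(tf.float32, [None, 14, 14, 512])

# sum_pool=True yields a [batch, output_dim] feature vector.
cbp = compact_bilinear_pooling_layer(conv5_3, conv5_2, 16000, sum_pool=True)

# Signed square root, then L2 normalization, as in the CBP paper.
cbp = tf.multiply(tf.sign(cbp), tf.sqrt(tf.abs(cbp) + 1e-12))
cbp = tf.nn.l2_normalize(cbp, dim=1)

# Fully-connected softmax classifier on the normalized features.
logits = tf.layers.dense(cbp, num_classes)
```

If the loss still fails to decrease after adding this, a smaller learning rate is the next thing worth trying, since the pooled features have a much larger dimensionality than the usual FC inputs.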