Naga Sandeep Ramachandruni

21 comments by Naga Sandeep Ramachandruni

I ran into the same problem, so I am using the previous layer's features instead of the layer normally used in image_to_head, which makes the feature dimension twice the size. In...

I have changed the feature stride to 8 and added a deconv layer after net_conv (image_to_head) to double the size of the feature map. The original feature map was intended...

@ReneWu1117 I would suggest adding this after net_conv instead of doing it in the build_base function, which mostly contains the initial layers of the network. I have added slim.layers.conv2d_transpose(1024, [4,4], [2,2], scope='deconv')...
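As a minimal sketch of why a kernel-4, stride-2 transposed convolution doubles the feature map (assuming TensorFlow's 'SAME' padding, where the output size depends only on the stride), the size arithmetic behind the suggestion above is:

```python
def deconv_output_size(in_size, stride=2):
    """Spatial output size of conv2d_transpose with 'SAME' padding:
    out = in * stride (the kernel size does not affect it)."""
    return in_size * stride

# A 38x50 feature map from a stride-16 backbone becomes 76x100,
# giving an effective feature stride of 8.
print(deconv_output_size(38), deconv_output_size(50))  # 76 100
```

The [4,4] kernel with stride [2,2] is a common choice because the kernel evenly covers each stride step, avoiding checkerboard artifacts.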

Overlaps is a two-dimensional matrix; each cell shows the amount of overlap (IoU) between a generated box and an actual ground-truth box. If you take the argmax along one axis you will get...
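A small NumPy sketch of this matrix (my own illustrative `iou_matrix` helper, not the repository's implementation) and what the argmax along each axis gives you:

```python
import numpy as np

def iou_matrix(anchors, gt):
    """overlaps[i, j] = IoU between generated box i and ground-truth box j.
    Boxes are (x1, y1, x2, y2)."""
    overlaps = np.zeros((anchors.shape[0], gt.shape[0]))
    for i, a in enumerate(anchors):
        for j, g in enumerate(gt):
            x1, y1 = max(a[0], g[0]), max(a[1], g[1])
            x2, y2 = min(a[2], g[2]), min(a[3], g[3])
            inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
            area_a = (a[2] - a[0]) * (a[3] - a[1])
            area_g = (g[2] - g[0]) * (g[3] - g[1])
            overlaps[i, j] = inter / (area_a + area_g - inter)
    return overlaps

anchors = np.array([[0, 0, 10, 10], [5, 5, 15, 15]], dtype=float)
gt = np.array([[0, 0, 10, 10]], dtype=float)
ov = iou_matrix(anchors, gt)

# argmax over axis=1: the best-matching ground-truth box for each anchor
best_gt_per_anchor = ov.argmax(axis=1)
# argmax over axis=0: the best-matching anchor for each ground-truth box
best_anchor_per_gt = ov.argmax(axis=0)
```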

Before running, clean your annotation files by checking these conditions: if int(ymin) > int(height): if int(ymax) > int(height): if int(xmin) >= int(xmax): if int(ymin) >= int(ymax): if int(xmin) > int(width):...
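A sketch of those checks as a single validation function (assuming Pascal-VOC-style box coordinates; the `is_valid_box` name is mine, and the truncated list likely continues with the symmetric width check, included here as an assumption):

```python
def is_valid_box(xmin, ymin, xmax, ymax, width, height):
    """Return False for boxes that fall outside the image or have
    non-positive width/height, per the conditions listed above."""
    if int(ymin) > int(height):
        return False
    if int(ymax) > int(height):
        return False
    if int(xmin) >= int(xmax):
        return False
    if int(ymin) >= int(ymax):
        return False
    if int(xmin) > int(width):
        return False
    if int(xmax) > int(width):  # assumed continuation of the truncated list
        return False
    return True
```

Dropping (or clipping) such boxes before training avoids NaN losses and index errors in the anchor-target assignment.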

If you have a pretrained model, you can replace the one in data/imagenet_weights; otherwise it will fine-tune from the ImageNet weights.

Same issue here. I changed the images to 224 instead of 448 and resolved it by changing this line in bcnn_finetuning.py (line 218): self.conv5_3 = tf.reshape(self.conv5_3, [-1, 512, 784]) ''' Reshape conv5_3 from [batch_size, number_of_filters,...

@YanShuo1992 I can tell you for multiples of 224: if you make your image 112, i.e. 224/2, it should be 49 (196/4); if you make it 448 (224x2) it should be...
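The arithmetic behind these numbers: VGG-16's conv5_3 has a stride of 16, so the flattened spatial size is (image_size / 16) squared. A small sketch (my own helper name):

```python
def conv5_spatial(image_size, stride=16):
    """Flattened spatial size of VGG-16 conv5_3: (image_size // stride) ** 2."""
    side = image_size // stride
    return side * side

print(conv5_spatial(112), conv5_spatial(224), conv5_spatial(448))  # 49 196 784
```

This is why halving the image divides the reshape dimension by 4 and doubling it multiplies it by 4.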

Which options should I use to implement another loss function (hinge loss) instead of the default zero/one loss? Learning options: -l [0..] -> Loss function to use. 0: zero/one loss...

Me too. Please provide another source. Thanks.