akarshzingade
Well, the weight file linked in this repository is 275 MB. I am not sure about the orientation part.
Maybe you could use a different YOLOv2 architecture, like Tiny YOLO, which has a lower memory footprint. Once you get the bounding box for your logo, you can use orientation detection...
Hey Mike and Longzeyilang! Sub-sampling is the same as strides: "subsample: tuple of length 2. Factor by which to subsample output. Also called strides elsewhere." — https://faroit.github.io/keras-docs/1.2.2/layers/convolutional/#convolution2d
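To make the equivalence concrete, here is a small sketch (plain Python, no Keras required) of how a stride, i.e. the `subsample` factor in Keras 1.x `Convolution2D`, shrinks a convolution's output size. The `conv_output_size` helper is mine, just for illustration:

```python
def conv_output_size(input_size, kernel_size, stride, padding=0):
    """Spatial output size of a conv layer.

    'subsample' in Keras 1.x Convolution2D is the same parameter
    that Keras 2.x (and most other frameworks) call 'strides'.
    """
    return (input_size + 2 * padding - kernel_size) // stride + 1


# stride (subsample) of 1 keeps the resolution; 2 halves it
print(conv_output_size(224, 3, 1, padding=1))  # 224
print(conv_output_size(224, 3, 2, padding=1))  # 112
```

So a `subsample=(2, 2)` convolution in Keras 1.x downsamples exactly like `strides=(2, 2)` would elsewhere.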
Hey, Longzeyilang. I believe the implementation does follow the architecture shown in Figure 3. Please let me know what the difference is :)
Hey Longzeyilang! 1) Do you mean the parallel smaller networks alongside VGG? If yes, I have implemented them :) 2) The pairwise relevance score is hand-crafted. Unfortunately, they...
Yes, that is correct! I was supposed to change this but somehow forgot about it. Thank you for pointing that out :)
So, the model is relying on the color too much?
I think Inception is less sensitive to colour. You could try that. I haven't found any article/paper that shows the colour sensitivity of VGG. The closest I have found is...
50 per class is fine, I think. It's the number of triplets per query image that matters. I would say 50 triplets per query image and positive image pair.
Using "--num_pos_images 10 --num_neg_images 40" will create 40 × 10 = 400 triplets per query image. What I meant to say was 50 negative images per query image and positive image pair. But this will...
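The arithmetic above can be sketched in a few lines; `triplets_per_query` is a hypothetical helper I'm using just to show how the two flags combine:

```python
def triplets_per_query(num_pos_images, num_neg_images):
    """Total (query, positive, negative) triplets for one query image.

    Each of the num_pos_images positives is paired with every one of
    the num_neg_images negatives, so the counts multiply.
    """
    return num_pos_images * num_neg_images


# --num_pos_images 10 --num_neg_images 40
total = triplets_per_query(10, 40)
print(total)                      # 400 triplets per query image
print(total // 10)                # 40 triplets per (query, positive) pair
```

With 50 negatives per (query, positive) pair instead, the same 10 positives would give 500 triplets per query image.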