Christopher

Results 28 comments of Christopher

```python
# One-hot membership matrix: row i has a 1 in the column of its cluster.
one_hot = np.zeros((N, k))
one_hot[np.arange(N), assignments] = 1
# Summing over rows gives the number of boxes per cluster.
nb_per_cluster = np.sum(one_hot, axis=0)
```
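A minimal runnable sketch of that per-cluster count, with hypothetical values for `N`, `k`, and `assignments` (they are not given in the original comment):

```python
import numpy as np

# Hypothetical example: N = 6 boxes assigned to k = 3 clusters.
N, k = 6, 3
assignments = np.array([0, 2, 1, 0, 2, 2])

# One-hot membership matrix: row i has a 1 in the column of its cluster.
one_hot = np.zeros((N, k))
one_hot[np.arange(N), assignments] = 1

# Summing over rows counts how many boxes fall into each cluster.
nb_per_cluster = np.sum(one_hot, axis=0)
print(nb_per_cluster)  # [2. 1. 3.]
```

If the counts are of the same order of magnitude, the clustering is reasonably balanced.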

Sounds like the clusters are well balanced. Now a new question arises: in annotation_dims, how have your values w, h been normalized? If you normalize by dividing box_width by image_width and box_height...
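For illustration, here is that per-axis normalization in isolation (the image size and boxes are assumed, since the original comment does not show them):

```python
import numpy as np

# Hypothetical boxes as (box_width, box_height) in pixels,
# in an image that is wider than it is tall.
image_width, image_height = 640, 480
boxes = np.array([[100.0, 100.0],   # a square box in pixels
                  [200.0, 150.0]])

# Normalize width by image_width and height by image_height.
annotation_dims = boxes / np.array([image_width, image_height])
print(annotation_dims)
# The square 100x100 box becomes 0.15625 x ~0.2083: no longer square.
```

Because width and height are divided by different numbers, aspect ratios are not preserved by this scheme.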

So, now you have your answer: since images are resized to 416x416 in some Yolo implementations without preserving the aspect ratio, it is normal that you get some rectangles, since squares...
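The distortion is easy to see numerically; a sketch assuming a 640x480 source image resized to 416x416 without letterboxing:

```python
# A square 100x100 box in a 640x480 image, after a plain resize
# to 416x416 that does not preserve the aspect ratio.
sx = 416 / 640  # horizontal scale factor
sy = 416 / 480  # vertical scale factor

w_resized = 100 * sx
h_resized = 100 * sy
print(w_resized, h_resized)  # 65.0 vs ~86.67: the square became a rectangle
```

The two scale factors differ whenever the source image is not already square, so every square box comes out stretched along the shorter image axis.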

When you compute the new centroids in https://github.com/Jumabek/darknet_scripts/blob/master/gen_anchors.py#L98-L102 you get values between 0 and 1:

```
[[ 0.22046375  0.34864097]
 [ 0.06277104  0.10153069]
 [ 0.15715894  0.21793756]
 [ 0.3867708   0.50731473]
 [ 0.70217017  0.86072544]]
```

...
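That centroid update is essentially a per-cluster mean of the normalized (w, h) pairs; a simplified standalone sketch (assumed behavior, not the linked script verbatim):

```python
import numpy as np

# Hypothetical normalized (w, h) pairs, all in [0, 1].
annotation_dims = np.array([[0.1, 0.2],
                            [0.3, 0.4],
                            [0.5, 0.6],
                            [0.7, 0.8]])
assignments = np.array([0, 0, 1, 1])
k = 2

# New centroid = mean of the dims assigned to each cluster.
# A mean of values in [0, 1] stays in [0, 1], which is why the
# centroids printed above are all between 0 and 1.
centroids = np.array([annotation_dims[assignments == c].mean(axis=0)
                      for c in range(k)])
print(centroids)
# [[0.2 0.3]
#  [0.6 0.7]]
```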

Yes, you multiply them back by image_width or image_height, depending on whether it is a width or a height, and then divide by max(image_height, image_width). Then you'll get squares for sure. But be...

It works by default. Here I'm just telling you how to check, to get square anchors: this is because your boxes, which look square, are not if you consider them...

If you just re-initialize the last layer's weights, it is transfer learning.

If you want the ratio to be preserved, divide the dimensions by max(image_height, image_width), or even skip normalization and keep them in pixels; but then you...
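Dividing both dimensions by the same scale keeps aspect ratios intact; a sketch with the same assumed 640x480 image as above:

```python
import numpy as np

image_width, image_height = 640, 480
boxes = np.array([[100.0, 100.0],   # a square box in pixels
                  [200.0, 150.0]])

# One common scale for both axes, so aspect ratios survive.
scale = max(image_height, image_width)
dims = boxes / scale
print(dims)
# The 100x100 box stays square: [0.15625 0.15625]
```

Note the contrast with per-axis normalization: here width and height are divided by the same number, so a square stays square.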

From my point of view, your clusters are well balanced, so that's OK.

I was just answering why you did not get square anchor boxes: this is due to the image being resized to a square.