
0 Boxes detected always

kubasienki opened this issue 6 years ago • 3 comments

Hi, I have a problem training the network on my own dataset. I have even tried with white blobs on a black background, and the network is still unable to detect anything. Here is my config (for object_scale I also tried 5.0; it makes no difference):

{
    "model" : {
        "backend":              "Full Yolo",
        "input_size":           416,
        "anchors":              [0.29,0.11, 1.83,0.57, 3.79,1.31, 7.82,2.63, 13.70,5.09],
        "max_box_per_image":    10,
        "labels":               ["Rock"]
    },

    "train": {
        "train_image_folder":   "----",
        "train_annot_folder":   "----",

        "train_times":          3,
        "pretrained_weights":   "full_yolo_racoon.h5",
        "batch_size":           16,
        "learning_rate":        1e-4,
        "nb_epochs":            50,
        "warmup_epochs":        3,

        "object_scale":         1.0 , (or 5.0 doesn't matter)
        "no_object_scale":      1.0,
        "coord_scale":          1.0,
        "class_scale":          1.0,

        "saved_weights_name":   "full_yolo_rock_2.h5",
        "debug":                true
    },

    "valid": {
        "valid_image_folder":   "",
        "valid_annot_folder":   "",

        "valid_times":          1
    }
}
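
For what it's worth, the anchors in the config can be re-derived from the training annotations. Below is a rough standalone sketch (not the repo's own anchor script) that runs a plain k-means on box widths/heights; it assumes the anchors are expressed in grid-cell units on a 13x13 grid (416 input / 32 stride), and GRID and the folder path are placeholders:

# Rough sketch: derive anchor widths/heights from VOC-style annotations
# with a plain k-means on (w, h) in grid-cell units.
# Assumes a 13x13 output grid (416 input / stride 32) -- adjust if different.
import glob
import random
import xml.etree.ElementTree as ET

GRID = 13  # assumed: cells per side of the YOLO output grid

def load_wh(annot_dir):
    """Collect (width, height) of every box, normalised to grid-cell units."""
    pairs = []
    for path in glob.glob(annot_dir + "/*.xml"):
        root = ET.parse(path).getroot()
        img_w = float(root.find("size/width").text)
        img_h = float(root.find("size/height").text)
        for box in root.findall("object/bndbox"):
            w = float(box.find("xmax").text) - float(box.find("xmin").text)
            h = float(box.find("ymax").text) - float(box.find("ymin").text)
            pairs.append((w / img_w * GRID, h / img_h * GRID))
    return pairs

def kmeans(pairs, k=5, iters=50):
    """Very plain k-means on (w, h); real anchor tools usually cluster by IoU."""
    centers = random.sample(pairs, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for w, h in pairs:
            i = min(range(k),
                    key=lambda c: (w - centers[c][0]) ** 2 + (h - centers[c][1]) ** 2)
            clusters[i].append((w, h))
        centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return sorted(centers)

if __name__ == "__main__":
    anchors = kmeans(load_wh("train_annot_folder"))  # placeholder path
    print([round(v, 2) for wh in anchors for v in wh])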

My image when training on real data: [image]

My image from the blob experiment: [image]

My annotation:

<annotation>
    <folder>Rocks</folder>
    <filename>0085.png</filename>
    <path>-----/0085.png</path>
    <source>
        <database>Unknown</database>
    </source>
    <size>
        <width>540</width>
        <height>960</height>
        <depth>3</depth>
    </size>
    <segmented>0</segmented>
    <object>
        <name>Rock</name>
        <pose>Unspecified</pose>
        <truncated>0</truncated>
        <difficult>0</difficult>
        <bndbox>
            <xmin>433</xmin>
            <ymin>17</ymin>
            <xmax>486</xmax>
            <ymax>53</ymax>
        </bndbox>
    </object>
    <object>
        <name>Rock</name>
        <pose>Unspecified</pose>
        <truncated>0</truncated>
        <difficult>0</difficult>
        <bndbox>
            <xmin>489</xmin>
            <ymin>40</ymin>
            <xmax>497</xmax>
            <ymax>47</ymax>
        </bndbox>
    </object>
    <object>
        <name>Rock</name>
        <pose>Unspecified</pose>
        <truncated>0</truncated>
        <difficult>0</difficult>
        <bndbox>
            <xmin>358</xmin>
            <ymin>75</ymin>
            <xmax>623</xmax>
            <ymax>300</ymax>
        </bndbox>
    </object>
    <object>
        <name>Rock</name>
        <pose>Unspecified</pose>
        <truncated>0</truncated>
        <difficult>0</difficult>
        <bndbox>
            <xmin>750</xmin>
            <ymin>174</ymin>
            <xmax>939</xmax>
            <ymax>234</ymax>
        </bndbox>
    </object>
    <object>
        <name>Rock</name>
        <pose>Unspecified</pose>
        <truncated>0</truncated>
        <difficult>0</difficult>
        <bndbox>
            <xmin>587</xmin>
            <ymin>227</ymin>
            <xmax>655</xmax>
            <ymax>289</ymax>
        </bndbox>
    </object>
</annotation>
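
As a sanity check on annotations like the one above, here is a minimal standalone sketch (assuming the PASCAL VOC layout shown; the folder name is a placeholder) that flags boxes which are inverted or fall outside the declared image size:

# Minimal sanity check for VOC-style annotations:
# flags boxes that are inverted or lie outside the declared image size.
import glob
import xml.etree.ElementTree as ET

def check(annot_dir):
    for path in glob.glob(annot_dir + "/*.xml"):
        root = ET.parse(path).getroot()
        img_w = int(root.find("size/width").text)
        img_h = int(root.find("size/height").text)
        for obj in root.findall("object"):
            name = obj.find("name").text
            box = obj.find("bndbox")
            xmin, ymin = int(box.find("xmin").text), int(box.find("ymin").text)
            xmax, ymax = int(box.find("xmax").text), int(box.find("ymax").text)
            if not (0 <= xmin < xmax <= img_w and 0 <= ymin < ymax <= img_h):
                print(f"{path}: bad {name} box {(xmin, ymin, xmax, ymax)} "
                      f"for image size {(img_w, img_h)}")

check("train_annot_folder")  # placeholder path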

kubasienki · Jun 13 '18

Just found out that if I stop training while the loss is still relatively high, it does detect objects, but it puts a lot of boxes on each object.

kubasienki · Jun 13 '18
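
(Side note: piles of boxes on a single object are normally pruned at prediction time by a confidence threshold plus non-max suppression. Below is a minimal greedy NMS sketch for illustration only, not keras-yolo2's own decoding code; boxes are assumed to be (xmin, ymin, xmax, ymax, score) tuples.)

# Minimal greedy NMS sketch: keep the highest-scoring box and drop any
# remaining box that overlaps a kept box above iou_thresh.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def nms(boxes, score_thresh=0.3, iou_thresh=0.3):
    """boxes: list of (xmin, ymin, xmax, ymax, score); returns the kept subset."""
    boxes = sorted((b for b in boxes if b[4] >= score_thresh),
                   key=lambda b: b[4], reverse=True)
    kept = []
    for b in boxes:
        if all(iou(b, k) < iou_thresh for k in kept):
            kept.append(b)
    return kept

# e.g. two overlapping detections of the same rock collapse to one box
print(nms([(430, 15, 488, 55, 0.9), (432, 16, 490, 54, 0.7), (100, 100, 150, 150, 0.8)]))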

@kubasienki Have you found any solution?

getsanjeev · Jan 23 '19

Probably. My guess is that the YOLO architecture works on a grid for which my objects are too small, but that's just a guess.

kubasienki · Jan 29 '19
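
(For a rough sense of scale, here is a back-of-the-envelope sketch, assuming the 416 input from the config above and a stride-32 backbone, i.e. a 13x13 grid; the image orientation used in the example call is a guess.)

# Back-of-the-envelope check of object size vs. grid-cell size
# (assumes a 416x416 network input and a stride-32 backbone -> 13x13 grid).
INPUT, STRIDE = 416, 32
grid_cells = INPUT // STRIDE            # 13 cells per side
cell_px = INPUT // grid_cells           # each cell covers 32 x 32 input pixels

def cells_covered(box_w, box_h, img_w, img_h):
    """Roughly how many grid cells a box spans after resizing to the network input."""
    w = box_w / img_w * INPUT / cell_px
    h = box_h / img_h * INPUT / cell_px
    return w, h

# e.g. the 8 x 7 px box from the annotation above, in a ~960 px wide frame,
# spans only a small fraction of a single cell after resizing:
print(cells_covered(8, 7, 960, 540))    # assumed orientation of that frame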