keras-yolo2
raccoon detection accuracy
Hi
I played around with the raccoon model a bit. It works reasonably well, but I found some false positive cases.
- Any chance we can improve these false positives? I think for the false negatives, we can improve by training a bit harder. Can we deal with the false positives more efficiently?
How many epochs did you train?
I predicted directly with the pre-trained weights (the backend is MobileNet), mobilenet_raccoon.h5, from experiencor's repo. I do not know how many epochs it took to get to where we are now.
Regards
@pure-water This happens to me too. To address this, you need to add images with no raccoons to the dataset, which serve as negative examples. Ideally include other similar-looking animals.
How did you label the no-raccoon images? With a bounding box or without? If you labeled them with a bbox, did you place it around an object in the picture or somewhere else?
just no bounding boxes
Does the xml file then look like this, with an empty object tag? I get:
Traceback (most recent call last):
File "train.py", line 102, in
A side question to @experiencor. I am fairly new to training. I previously used Darknet's native framework for training, which uses a somewhat different annotation syntax (more or less CSV). Here you seem to be using XML instead. Is this determined by Keras or a personal design choice? I am just wondering whether the machine learning community has a standard annotation format, so people can save extra effort when switching frameworks.
You should remove the object tag altogether, like:
<annotation verified="yes">
<folder>images</folder>
<filename>raccoon-105.png</filename>
<path>/Users/datitran/Desktop/raccoon/images/raccoon-105.png</path>
<source>
<database>Unknown</database>
</source>
<size>
<width>720</width>
<height>960</height>
<depth>3</depth>
</size>
<segmented>0</segmented>
</annotation>
I think the VOC format is the most straightforward: one xml annotation for one image.
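To make the "one xml per image" idea concrete, here is a minimal sketch of parsing a VOC-style annotation with the standard library. The tag names follow the VOC convention used in the example above; the function name and returned dict layout are illustrative, not the repo's actual API. Note that an image with no object tags (a negative example) simply yields an empty object list.

```python
import xml.etree.ElementTree as ET

# VOC-style annotation for a negative example: no <object> tags at all,
# mirroring the raccoon-105.png example above.
VOC_XML = """<annotation verified="yes">
  <folder>images</folder>
  <filename>raccoon-105.png</filename>
  <size><width>720</width><height>960</height><depth>3</depth></size>
  <segmented>0</segmented>
</annotation>"""

def parse_voc_annotation(xml_text):
    """Parse one VOC xml annotation into a plain dict (illustrative)."""
    root = ET.fromstring(xml_text)
    img = {
        'filename': root.findtext('filename'),
        'width':  int(root.findtext('size/width')),
        'height': int(root.findtext('size/height')),
        'object': [],
    }
    # Each <object> holds a class name and a pixel-space bounding box.
    for obj in root.findall('object'):
        img['object'].append({
            'name': obj.findtext('name'),
            'xmin': int(obj.findtext('bndbox/xmin')),
            'ymin': int(obj.findtext('bndbox/ymin')),
            'xmax': int(obj.findtext('bndbox/xmax')),
            'ymax': int(obj.findtext('bndbox/ymax')),
        })
    return img

parsed = parse_voc_annotation(VOC_XML)
```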
When I do so and start training, the negative examples are not read. Only the pictures whose labels are specified in config['model']['labels'] are read.
Since we removed the object tag from the xml files of the negative examples, these pictures have no class, so the code doesn't read them.
How can I deal with this?
@bexx1963 You can disable the check at line 55 of processing.py to include images with no labels. Still keep line 56.
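The intent of the suggested change can be sketched as below. This is not the repo's actual parsing code; the function and variable names (filter_annotations, all_imgs, wanted_labels) are illustrative. The point is that the filter should keep both images containing the configured labels and images with an empty object list, instead of dropping the latter.

```python
# Keep images that contain a wanted label AND images with no objects at
# all (negative examples), rather than discarding the unlabeled ones.
def filter_annotations(all_imgs, wanted_labels):
    kept = []
    for img in all_imgs:
        has_wanted = any(obj['name'] in wanted_labels for obj in img['object'])
        is_negative = len(img['object']) == 0  # no <object> tags in the xml
        if has_wanted or is_negative:
            kept.append(img)
    return kept

imgs = [
    {'filename': 'raccoon-001.png', 'object': [{'name': 'raccoon'}]},
    {'filename': 'dog-001.png',     'object': [{'name': 'dog'}]},
    {'filename': 'empty-001.png',   'object': []},   # negative example
]
kept = filter_annotations(imgs, ['raccoon'])
```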
@experiencor - A further clarification on the code in frontend.py, regarding the final evaluation function evaluate. Again, I trained a bit (on the raccoon dataset) and the results seem very reasonable. This is just using the dataset in the GitHub repo.
However, at the end of training the validation loss is very small as well, yet the mAP always shows as "0". So I spent some time debugging a bit deeper. I found the following:
def evaluate(...):
    for i in range(generator.size()):
        detections = all_detections[i][label]
        annotations = all_annotations[i][label]
        ...
        for d in detections:
            scores = np.append(scores, d[4])
            ...
            overlaps = compute_overlap(np.expand_dims(d, axis=0), annotations)
It appears the model's prediction results (xmin, ymin, xmax, ymax) are normalized into the [0, 1] range, while the annotations are in image-size space.
Therefore the overlap calculation between predictions and annotations is wrong, and the final result is always "no overlap".
Is this a genuine issue, or did I miss something?
Thanks
I think I have a fix. Basically I just need to scale the predicted boxes back to the full image-size space by modifying the code in question.
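The fix described above can be sketched as follows. This is an illustration of the idea rather than the actual patch: the network outputs boxes normalized to [0, 1], while annotations are in pixel coordinates, so the predicted (xmin, ymin, xmax, ymax) must be multiplied by the image width and height before computing overlaps. The function name scale_boxes is hypothetical.

```python
import numpy as np

def scale_boxes(pred_boxes, image_w, image_h):
    """Scale [0, 1]-normalized boxes back to pixel space (illustrative)."""
    boxes = np.array(pred_boxes, dtype=float)
    boxes[:, [0, 2]] *= image_w   # xmin, xmax back to pixel coordinates
    boxes[:, [1, 3]] *= image_h   # ymin, ymax back to pixel coordinates
    return boxes

# One normalized box on a 720x960 image (the size from the xml above).
scaled = scale_boxes([[0.1, 0.2, 0.5, 0.4]], image_w=720, image_h=960)
```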
Can I fix this in the trunk?
It seems so. Can you make a PR so I can merge?
@experiencor
I made a PR; please merge.
Regards
Merged. Thanks, @pure-water.
@experiencor - Just wondering, for the pre-trained weights (MobileNet) you put on the web, how much training did it take to get them?
I don't remember.
I want to detect only one class, so the best dataset for me is: images labeled with that one class, plus unlabeled images without it. Is that right?
When I do so and start training, the negative examples are not read. Only the pictures whose labels are specified in config['model']['labels'] are read. Since we removed the object tag from the xml files of the negative examples, these pictures have no class, so the code doesn't read them. How to deal with this?
@bexx1963 I want to add some negative examples to the training set, and I got this error when changing the check to:
if len(img['object']) > -1:

Traceback (most recent call last):
  File "train.py", line 100, in <module>
    main(args)
  File "train.py", line 96, in main
    debug = config['train']['debug'])
  File "/home/keras-yolo2/frontend.py", line 341, in train
    average_precisions = self.evaluate(valid_generator)
  File "/home/keras-yolo2/frontend.py", line 400, in evaluate
    all_annotations[i][label] = annotations[annotations[:, 4] == label, :4].copy()
IndexError: index 4 is out of bounds for axis 1 with size 0

What have you done to add the negative dataset? Thank you very much.
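A likely cause of the IndexError above, sketched here with illustrative names (the repo's evaluate() may differ in detail): when an image has no ground-truth boxes, np.array([]) has shape (0,), so there is no axis 1 and indexing column 4 fails. Representing "no annotations" as an empty (0, 5) array keeps the label-filtering expression valid.

```python
import numpy as np

def annotations_for_label(raw_annotations, label):
    """Select the 4 box columns for one class, tolerating empty input."""
    anns = np.asarray(raw_annotations, dtype=float)
    if anns.size == 0:
        # 0 rows, 5 columns: 4 box coordinates + 1 class column.
        anns = np.empty((0, 5))
    # Same shape of expression as in evaluate(): filter rows by class,
    # then keep only the (xmin, ymin, xmax, ymax) columns.
    return anns[anns[:, 4] == label, :4]

empty = annotations_for_label([], label=0)                      # negative image
some  = annotations_for_label([[10, 20, 30, 40, 0]], label=0)   # one raccoon box
```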