
raccoon detection accuracy

Open pure-water opened this issue 6 years ago • 20 comments

Hi
I played around with the raccoon model a bit. It works reasonably well, but I found some false positive cases.

  • Any chance we can improve these false positive cases? I think the false negatives can be improved by training a bit harder. Can we deal with the false positives more efficiently?

(attached detections: raccoon_test_05_detected, raccoon_test_07_detected, raccoon_test_09_detected, raccoon_test_01_detected)

pure-water avatar Apr 15 '18 03:04 pure-water

How many epochs did you train?

bexx1963 avatar Apr 16 '18 10:04 bexx1963

I predicted directly with the pre-trained weights (MobileNet backend), mobilenet_raccoon.h5, from experiencor's shared area. I do not know how many epochs it took to get to where it is now.

Regards

pure-water avatar Apr 16 '18 12:04 pure-water

@pure-water This happens to me too. To address it, you need to add images with no raccoon to the dataset, which serve as negative examples. Ideally use other similar-looking animals.

experiencor avatar Apr 16 '18 12:04 experiencor

How did you label the no-raccoon images? With a bounding box or without? If you labeled them with a bounding box, did you place it around an object in the picture or somewhere else?

bexx1963 avatar Apr 17 '18 13:04 bexx1963

just no bounding boxes

experiencor avatar Apr 17 '18 15:04 experiencor

(screenshot of the XML annotation attached)

Does the XML file then look like this, with an empty bounding box and the class "garbage"? train.py gives me an error:

bexx1963 avatar Apr 17 '18 15:04 bexx1963

Traceback (most recent call last):
  File "train.py", line 102, in <module>
    main(args)
  File "train.py", line 98, in main
    debug = config['train']['debug'])
  File "/media/srv-0/abajrami/YOLOv2/frontend.py", line 383, in train
    max_queue_size = 8)
  File "/home/abajrami/.local/lib/python2.7/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/home/abajrami/.local/lib/python2.7/site-packages/keras/engine/training.py", line 2192, in fit_generator
    generator_output = next(output_generator)
  File "/home/abajrami/.local/lib/python2.7/site-packages/keras/utils/data_utils.py", line 584, in get
    six.raise_from(StopIteration(e), e)
  File "/home/abajrami/.local/lib/python2.7/site-packages/six.py", line 737, in raise_from
    raise value
StopIteration: 'xmin'

bexx1963 avatar Apr 17 '18 15:04 bexx1963

A side question for @experiencor. I am fairly new to training. I used to use the native Darknet framework for training as well, which uses a somewhat different annotation syntax (more or less CSV-like). Here you seem to be using XML instead. Is this determined by Keras or a personal design choice? I am just wondering whether, in the machine-learning training community, there is a standard annotation format, so that people can save extra effort when switching frameworks.
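For concreteness, a Darknet-style annotation is typically one plain-text file per image, one line per object, with the class index followed by box centre and size relative to the image dimensions (the numbers below are made up for illustration):

raccoon-105.txt (class x_center y_center width height, all normalized to [0, 1]):

0 0.512 0.430 0.385 0.610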

pure-water avatar Apr 18 '18 00:04 pure-water

You should remove the <object> tag altogether, like:

<annotation verified="yes">
	<folder>images</folder>
	<filename>raccoon-105.png</filename>
	<path>/Users/datitran/Desktop/raccoon/images/raccoon-105.png</path>
	<source>
		<database>Unknown</database>
	</source>
	<size>
		<width>720</width>
		<height>960</height>
		<depth>3</depth>
	</size>
	<segmented>0</segmented>
</annotation>

I think the VOC format is the most straightforward: one XML annotation per image.

experiencor avatar Apr 18 '18 14:04 experiencor

When I do so and start training, the negative examples are not read. Only the pictures whose classes are specified in config['model']['labels'] are read.

Because we removed the <object> tag in the XML files for the negative examples, these pictures have no class, so the code does not read them.

How do I deal with this?

bexx1963 avatar Apr 19 '18 10:04 bexx1963

@bexx1963 You can disable the checking at line 55 of processing.py to include images with no labels. Still keep line 56.
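For illustration, here is a minimal sketch of the kind of parsing loop that keeps negative examples, assuming an annotation-parsing function along the lines of the one referred to above (names and line numbers here are illustrative, not the repo's exact code):

import os
import xml.etree.ElementTree as ET

def parse_annotations(ann_dir, img_dir, labels):
    """Rough sketch only: collect every image, including negatives
    that have no <object> tag, instead of skipping them."""
    all_imgs = []
    for ann_file in sorted(os.listdir(ann_dir)):
        img = {'object': []}
        tree = ET.parse(os.path.join(ann_dir, ann_file))
        for elem in tree.iter():
            if elem.tag == 'filename':
                img['filename'] = os.path.join(img_dir, elem.text)
            if elem.tag == 'width':
                img['width'] = int(elem.text)
            if elem.tag == 'height':
                img['height'] = int(elem.text)
            if elem.tag == 'object':
                obj = {}
                for attr in list(elem):
                    if attr.tag == 'name':
                        obj['name'] = attr.text
                    if attr.tag == 'bndbox':
                        for dim in list(attr):
                            obj[dim.tag] = int(round(float(dim.text)))
                if obj.get('name') in labels:
                    img['object'] += [obj]
        # The stock code only keeps images that contain at least one object,
        # roughly:  if len(img['object']) > 0: all_imgs += [img]
        # Appending unconditionally lets negative examples through as well.
        all_imgs += [img]
    return all_imgs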

experiencor avatar Apr 19 '18 14:04 experiencor

@experiencor - A further clarification on the code in frontend.py, regarding the final evaluation function evaluate(). Again, I trained a bit (on the raccoon dataset) and the results seem very reasonable. This is just using the dataset in the GitHub repo.

However, at the end of training, although the validation loss is very small, the mAP always shows as "0". So I spent some time debugging a bit deeper and found the following:

def evaluate(...):
    for i in range(generator.size()):
        detections  = all_detections[i][label]
        annotations = all_annotations[i][label]
        ...
        for d in detections:
            scores = np.append(scores, d[4])
            ...
            overlaps = compute_overlap(np.expand_dims(d, axis=0), annotations)

It appears the model prediction results (xmin, ymin, xmax, ymax) are normalized into the [0, 1] range, whereas the annotations are in image-pixel coordinates.

Therefore the overlap calculation between predictions and annotations is not right, and the final result is always "no overlap".

Is this a genuine issue, or did I miss something?

Thanks

pure-water avatar Apr 22 '18 16:04 pure-water

I think I have a fix. Basically I just need to scale the predicted boxes back to the full image size by modifying the following code.

Can I fix this in the trunk?

(screenshot of the modified code attached)
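The attached code is not reproduced here, but the idea described above can be sketched roughly as follows, assuming the predicted (xmin, ymin, xmax, ymax) values come out normalized to [0, 1] while the annotations are in pixels (names are illustrative, not the exact code in frontend.py):

import numpy as np

def scale_boxes_to_image(pred_boxes, image_w, image_h):
    """Scale normalized (xmin, ymin, xmax, ymax) boxes back to pixel
    coordinates so compute_overlap() compares them against ground-truth
    annotations in the same coordinate space."""
    boxes = np.array(pred_boxes, dtype=float)
    boxes[:, [0, 2]] *= image_w   # xmin, xmax
    boxes[:, [1, 3]] *= image_h   # ymin, ymax
    return boxes

# e.g. a box covering the centre of a 720x960 image:
# scale_boxes_to_image([[0.25, 0.25, 0.75, 0.75]], 720, 960)
# -> [[180. 240. 540. 720.]]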

pure-water avatar Apr 23 '18 13:04 pure-water

It seems so. Can you make a PR so I can merge?

experiencor avatar Apr 23 '18 13:04 experiencor

@experiencor

I made a PR; please merge.

Regards

pure-water avatar Apr 24 '18 15:04 pure-water

Merged. Thanks, @pure-water.

experiencor avatar Apr 26 '18 12:04 experiencor

@experiencor - Just wondering about the pre-trained weights (MobileNet) you put on the web. How much training did it take to get them?

pure-water avatar Apr 26 '18 15:04 pure-water

I don't remember.

experiencor avatar Apr 28 '18 05:04 experiencor

I want to detect only one class, so the best dataset for me is: images labeled with that one class, plus unlabeled images without it. Is that right?

jzx-gooner avatar Sep 27 '18 03:09 jzx-gooner

When I do so and start training, the negative examples are not read. Only the pictures whose classes are specified in config['model']['labels'] are read.

Because we removed the <object> tag in the XML files for the negative examples, these pictures have no class, so the code does not read them.

How do I deal with this?

@bexx1963 I want to add some negative examples to the training set, and I get this error after changing the check to if len(img['object']) > -1:

Traceback (most recent call last):
  File "train.py", line 100, in <module>
    main(args)
  File "train.py", line 96, in main
    debug = config['train']['debug'])
  File "/home/keras-yolo2/frontend.py", line 341, in train
    average_precisions = self.evaluate(valid_generator)
  File "/home/keras-yolo2/frontend.py", line 400, in evaluate
    all_annotations[i][label] = annotations[annotations[:, 4] == label, :4].copy()
IndexError: index 4 is out of bounds for axis 1 with size 0

What have you done to add the negative dataset? Thank you very much.
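For what it's worth, that IndexError typically means the per-image annotation array for a negative image is completely empty (shape (0,)), so there is no column 4 holding the class to compare against the label. A rough sketch of one way to guard against that (illustrative only, not the exact code in frontend.py):

import numpy as np

def annotations_for_label(annotations, label):
    """Select the (xmin, ymin, xmax, ymax) rows whose class column equals
    `label`. Negative images have no boxes, so handle the empty case
    explicitly instead of indexing a missing column 4."""
    annotations = np.asarray(annotations, dtype=float)
    if annotations.size == 0:
        return np.zeros((0, 4))   # no ground truth for this image
    return annotations[annotations[:, 4] == label, :4].copy()

# annotations_for_label([], 0).shape              -> (0, 4)
# annotations_for_label([[10, 20, 50, 60, 0]], 0) -> [[10. 20. 50. 60.]]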

jzx-gooner avatar Jan 04 '19 13:01 jzx-gooner