tf-faster-rcnn

get 0 AP

Open zdm123 opened this issue 7 years ago • 7 comments

I trained my own dataset, but got 0 AP. Does anybody know what the problem is?

zdm123 avatar Oct 01 '17 03:10 zdm123

im_detect: 688/691 1.004s 0.001s
im_detect: 689/691 1.008s 0.001s
im_detect: 690/691 1.007s 0.001s
im_detect: 691/691 1.006s 0.001s
Evaluating detections
Writing aeroplane VOC results file
Writing bicycle VOC results file
Writing bird VOC results file
Writing boat VOC results file
Writing bottle VOC results file
Writing bus VOC results file
Writing car VOC results file
Writing cat VOC results file
Writing chair VOC results file
Writing cow VOC results file
Writing diningtable VOC results file
Writing dog VOC results file
Writing horse VOC results file
Writing motorbike VOC results file
Writing person VOC results file
Writing pottedplant VOC results file
Writing sheep VOC results file
Writing sofa VOC results file
Writing train VOC results file
Writing tvmonitor VOC results file
VOC07 metric? Yes
Reading annotation for 1/691
Reading annotation for 101/691
Reading annotation for 201/691
Reading annotation for 301/691
Reading annotation for 401/691
Reading annotation for 501/691
Reading annotation for 601/691
Saving cached annotations to /home2/sdin7632/shanshan/tf-faster-rcnn-r1.0/data/VOCdevkit2007/annotations_cache/annots.pkl
AP for aeroplane = 0.0000
AP for bicycle = 0.0000
AP for bird = 0.0000
AP for boat = 0.0000
AP for bottle = 0.0000
AP for bus = 0.0000
AP for car = 0.0000
AP for cat = 0.0000
AP for chair = 0.0000
AP for cow = 0.0000
AP for diningtable = 0.0000
AP for dog = 0.0000
AP for horse = 0.0000
AP for motorbike = 0.0000
AP for person = 0.0000
AP for pottedplant = 0.0000
AP for sheep = 0.0000
AP for sofa = 0.0000
AP for train = 0.0000
AP for tvmonitor = 0.0000
Mean AP = 0.0000

Results:
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000

Results computed with the unofficial Python eval code. Results should be very close to the official MATLAB eval code. Recompute with ./tools/reval.py --matlab ... for your paper. -- Thanks, The Management

zdm123 avatar Oct 01 '17 04:10 zdm123

@zdm123 Hi~ Have you solved this problem?

YoungMagic avatar Dec 10 '17 02:12 YoungMagic

@zdm123 Hi, did you solve this problem? Please help me, thank you so much!

zqdeepbluesky avatar Jan 12 '18 18:01 zqdeepbluesky

@zqdeepbluesky @YoungMagic Hi, did you solve this problem?

weisq2691 avatar Apr 04 '18 04:04 weisq2691

Nope. I think it's something in the base-level implementation. I have switched to Detectron now.

YoungMagic avatar Apr 04 '18 04:04 YoungMagic

@zdm123 @weisq2691 I got the same problem. Finally, I found it was because my data and labels were not being loaded properly. My data file is modified from pascal_voc.py, where the line "cls = self._class_to_ind[obj.find('name').text.lower().strip()]" matches labels in lower case. I had set the labels in "self._classes" to lower case, but the labels in my annotation files are in upper case, so they could not be matched at the evaluation stage. I solved the problem by setting "self._classes" to upper case and changing "cls = self._class_to_ind[obj.find('name').text.lower().strip()]" to "cls = self._class_to_ind[obj.find('name').text.strip()]".
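
Below is a minimal, self-contained sketch of the class-name lookup that pascal_voc.py performs when reading an annotation. It is not the repository's exact code: the class tuple and the XML snippet are made up for illustration. It only shows why a case mismatch between self._classes and the <name> tags in the annotation XML can leave labels unmatched and drive every AP to 0.0000.

```python
# Hypothetical illustration of the case-sensitivity pitfall described above.
# The class list and XML snippet are examples, not the actual dataset.
import xml.etree.ElementTree as ET

CLASSES = ('__background__', 'car', 'person')        # lower-case names, as in pascal_voc.py
CLASS_TO_IND = {name: i for i, name in enumerate(CLASSES)}

XML = """<annotation>
  <object><name>CAR</name></object>
</annotation>"""                                      # upper-case label in the annotation file

def lookup(lowercase=True):
    """Return the class index for the first object in the annotation."""
    obj = ET.fromstring(XML).find('object')
    name = obj.find('name').text.strip()
    if lowercase:
        name = name.lower()          # what the original .lower() call does
    return CLASS_TO_IND[name]        # raises KeyError if the case conventions disagree

print(lookup(lowercase=True))        # 1 -- 'CAR' is normalized to 'car' and found
try:
    print(lookup(lowercase=False))
except KeyError as err:
    print('unmatched label:', err)   # mismatch: the label is never matched
```

Whichever convention you choose, the names in self._classes, the <name> tags in the annotation XML, and the class names used by the evaluation code must all agree on case; otherwise the ground truth and detections never line up and every class is reported with AP = 0.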

Heliang-Zheng avatar May 19 '18 07:05 Heliang-Zheng

cls = self._class_to_ind[obj.find('name').text.strip()]

I had been stuck on this problem for 3 days, and you just saved me!!!!! Thank you so much!!!

gbinduo avatar Apr 18 '21 08:04 gbinduo