MTCNN-Tensorflow
gen_hard_example for RNet becomes extremely slow after changing the landmarks to 68 points
Hello, I am using your network. After changing the landmarks to 68 points, generating bounding boxes for RNet became drastically slower:

1 out of 1000 images done 0.025541 seconds for each image
2 out of 1000 images done 187.113974 seconds for each image
3 out of 1000 images done 205.598158 seconds for each image
4 out of 1000 images done 46.019651 seconds for each image
5 out of 1000 images done 37.846596 seconds for each image
6 out of 1000 images done 60.305767 seconds for each image
7 out of 1000 images done 77.391738 seconds for each image
8 out of 1000 images done 39.144061 seconds for each image
9 out of 1000 images done 36.026902 seconds for each image
10 out of 1000 images done 56.116209 seconds for each image

After adding timers, I found the time is mainly spent in two places:

(1) The while loop in detect_PNet:

while min(current_height, current_width) > net_size:
    count = count + 1
    # return the result predicted by pnet
    # cls_cls_map: H*W*2
    # reg: H*W*4
    # class_prob and bbox_pred
    cls_cls_map, reg = self.pnet_detector.predict(im_resized)
    # boxes: num*9 (x1, y1, x2, y2, score, x1_offset, y1_offset, x2_offset, y2_offset)
    boxes = self.generate_bbox(cls_cls_map[:, :, 1], reg, current_scale, self.thresh[0])
    # scale_factor is 0.79 by default
    current_scale *= self.scale_factor
    im_resized = self.processed_image(im, current_scale)
    current_height, current_width, _ = im_resized.shape

    if boxes.size == 0:
        continue
    # keep the indices returned by non-maximum suppression
    keep = py_nms(boxes[:, :5], 0.5, 'Union')
    boxes = boxes[keep]
    all_boxes.append(boxes)
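For context on why the per-image time in this loop can vary so much: it builds an image pyramid and runs one PNet forward pass per scale, and each pass also produces candidate boxes that are thresholded and NMS-filtered. Below is a minimal sketch of how the number of pyramid levels (and therefore PNet passes) is determined, assuming net_size = 12, scale_factor = 0.79, and a min_face_size parameter like the original MTCNN uses; the function name and default values are illustrative, not taken from this repo:

def count_pyramid_levels(height, width, net_size=12, scale_factor=0.79, min_face_size=20):
    # initial scale so that a min_face_size face maps to roughly net_size pixels
    current_scale = float(net_size) / min_face_size
    levels = 0
    # approximates the loop condition above: keep shrinking until the image
    # is smaller than the PNet input size
    while min(height * current_scale, width * current_scale) > net_size:
        levels += 1
        current_scale *= scale_factor
    return levels

# e.g. a 1000x1000 input gives about 17 pyramid levels with these defaults,
# i.e. about 17 PNet forward passes per image
print(count_pyramid_levels(1000, 1000))

The number of PNet passes depends only on the input image size and scale_factor, not on the number of landmark points, so if this loop got slower after the 68-point change, the extra time is most likely inside pnet_detector.predict or generate_bbox, which may be worth timing separately.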
(2) The "merge the detection from first stage" step in detect_PNet:

bobo1 = time.time()
keep = py_nms(all_boxes[:, 0:5], 0.7, 'Union')
all_boxes = all_boxes[keep]
boxes = all_boxes[:, :5]
bbw = all_boxes[:, 2] - all_boxes[:, 0] + 1
bbh = all_boxes[:, 3] - all_boxes[:, 1] + 1
bobo2 = time.time()
print('"merge the detection from first stage" consume time :', bobo2 - bobo1)
I do not fully understand these two places. If it is convenient for you, I would really appreciate your guidance. Thank you very much.
May I ask how to modify the code to detect 68 landmark points?