2015_Face_Detection
Could you provide your training errors on each network?
Hi, thanks for your nice code. Recently I've been trying to implement the algorithm in Caffe. Could you provide your training errors for the 12/24/48 nets and their calibration nets, so I can check whether my code is correct?
@windpls
- 12net train error: ~5%
- 24net train error: ~2%
- 48net train error: ~1%
- 12net calibration train error: ~50%
- 24net calibration train error: ~30%
- 48net calibration train error: ~20%

Sorry, I don't remember the exact numbers.
Hi @layumi, thanks for your reply. But I have two more questions:
- Did you do data augmentation on the positive samples? I tried methods like mirroring the images and jittering the bounding boxes, but all of these strategies increased the validation error when training the 12net, resulting in a validation accuracy of about 92.7%, compared with 99.0% without data augmentation. (My augmentation is sketched after this list.)
- How did you decide the threshold for each net? The paper suggests that the detector should reach a 99% recall rate on a subset of AFLW after the 12net and 12 calibration net, so I held out 10% of the AFLW images. But the way to measure the recall rate confuses me: do I need to run multi-scale detection, and is NMS needed?
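For reference, my mirroring/jittering attempt looks roughly like this (a minimal sketch of my own code, not the repo's; the jitter range, helper name, and 12x12 crop size are my own choices):

```python
import random
import numpy as np
import cv2  # OpenCV, assumed available

def augment_positive(img, box, jitter=0.05):
    """Mirror a face crop and jitter its bounding box.

    `box` is (x, y, w, h); `jitter` bounds the shift as a fraction
    of the box size, so the jittered box keeps high IoU with the
    original face annotation.
    """
    x, y, w, h = box
    samples = []
    # horizontal mirror of the original crop
    crop = img[y:y + h, x:x + w]
    samples.append(cv2.flip(crop, 1))
    # jittered crop: shift the box by up to `jitter` of its size
    dx = int(w * random.uniform(-jitter, jitter))
    dy = int(h * random.uniform(-jitter, jitter))
    x2 = int(np.clip(x + dx, 0, img.shape[1] - w))
    y2 = int(np.clip(y + dy, 0, img.shape[0] - h))
    samples.append(img[y2:y2 + h, x2:x2 + w])
    # resize everything to the 12-net input resolution
    return [cv2.resize(s, (12, 12)) for s in samples]
```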
Looking forward to your reply.
@windpls
1. I use image mirroring and add dropout.
2. I test the model on FDDB and tune the thresholds according to the recall rates in the paper: 94% after 12net + 12netc, 89% after 24net + 24netc, 85% after 48net + 48netc.
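In code, that threshold tuning might look something like the following (a hypothetical sketch of the idea, not the code this repo uses; the per-face best-score bookkeeping is my assumption):

```python
import numpy as np

def tune_threshold(face_scores, target_recall=0.94):
    """Pick the largest score threshold that still meets the target recall.

    `face_scores[i]` is the best detector score among candidate windows
    that overlap ground-truth face i (e.g. IoU > 0.5), or -inf if the
    face was never hit. Recall at threshold t is then the fraction of
    faces with best score >= t.
    """
    scores = np.sort(np.asarray(face_scores))[::-1]  # descending
    n = len(scores)
    # keeping the top k faces gives recall k / n, so take the smallest
    # k that satisfies the target and use the k-th best score
    k = int(np.ceil(target_recall * n))
    return scores[k - 1]
```

For the first stage that would be e.g. `thr = tune_threshold(face_scores_after_12net, 0.94)`, and similarly with 0.89 and 0.85 for the later stages.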
@layumi I passed an image through your 12net and 12netc and found that many false-alarm boxes remained. The paper says that only 2.36% of detection windows survive the 12-net and 12-calibration-net stage, so it seems there is a gap between your implementation and the authors' 12net, isn't there?
@windpls On average:
- after 12net + 12netc + NMS, about 1000 windows remain (I don't remember this number exactly)
- after 24net + 24netc + NMS, about 100 windows remain (I don't remember this number exactly)
- after 48net + 48netc, about 30 windows remain
- after NMS with a 0.25 overlap threshold, 4 windows are left
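For anyone reproducing these counts: the NMS in each stage is standard greedy suppression. A minimal sketch (the `(x1, y1, x2, y2)` box layout and function name are assumptions; `iou_thresh=0.25` matches the final stage above):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.25):
    """Greedy non-maximum suppression.

    `boxes` is an (N, 4) array of (x1, y1, x2, y2) windows and `scores`
    their confidences. Returns indices of the kept windows.
    """
    order = np.argsort(scores)[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # intersection of the top box with the remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        # drop everything overlapping the kept box too much
        order = order[1:][iou <= iou_thresh]
    return keep
```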