Pytorch_Retinaface
Identifying small faces in large images
I'm trying to retrain RetinaFace with a custom dataset. My images are 1920x1080, and the average width and height of the faces in them is ~20 pixels. I have around 10k images for training. So far, the model is not able to identify the faces. Is there any preprocessing I can do, like resizing or cropping, that would help improve detection accuracy?
If you know there are always going to be small faces in a large image, just don't downscale the image. Run detection at full resolution and you can detect faces as small as 13 px. If there is a possibility of both large and small faces, you need to develop a multi-scale method (a resized pass plus a full-resolution pass) and merge the results using NMS.
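A minimal sketch of that multi-scale inference with NMS merging. Here `detect_fn` is a hypothetical stand-in for your model's forward pass plus box decoding (it returns `(boxes[N, 4]` in xyxy, `scores[N])`); the names and thresholds are illustrative, not this repo's API:

```python
import torch


def nms(boxes, scores, iou_thresh):
    """Plain NMS: keep the highest-scoring boxes, drop overlaps above iou_thresh."""
    order = scores.argsort(descending=True)
    keep = []
    while order.numel() > 0:
        i = order[0]
        keep.append(int(i))
        if order.numel() == 1:
            break
        rest = order[1:]
        # Intersection of box i with all remaining boxes.
        xy1 = torch.maximum(boxes[i, :2], boxes[rest, :2])
        xy2 = torch.minimum(boxes[i, 2:], boxes[rest, 2:])
        inter = (xy2 - xy1).clamp(min=0).prod(dim=1)
        area_i = (boxes[i, 2:] - boxes[i, :2]).prod()
        area_r = (boxes[rest, 2:] - boxes[rest, :2]).prod(dim=1)
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]
    return torch.tensor(keep, dtype=torch.long)


def multiscale_detect(image, detect_fn, scales=(0.5, 1.0), iou_thresh=0.4):
    """Run a single-scale detector at several scales, map boxes back to the
    original coordinate system, and merge duplicate detections with NMS."""
    all_boxes, all_scores = [], []
    for s in scales:
        # interpolate expects a batch dimension, hence unsqueeze/squeeze.
        scaled = torch.nn.functional.interpolate(
            image.unsqueeze(0), scale_factor=s, mode="bilinear",
            align_corners=False).squeeze(0)
        boxes, scores = detect_fn(scaled)
        all_boxes.append(boxes / s)  # back to original-image coordinates
        all_scores.append(scores)
    boxes = torch.cat(all_boxes)
    scores = torch.cat(all_scores)
    keep = nms(boxes, scores, iou_thresh)
    return boxes[keep], scores[keep]
```

The half-resolution pass covers the large faces while the full-resolution pass keeps the small ones visible; the same face found at two scales collapses to one box in the final NMS step.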
Thanks a lot for the prompt reply!
My plan was to resize the original image to twice its size, crop it into 4 pieces, and use the cropped images for training, so that the bounding boxes become larger relative to the image. Can you say whether this will help the model learn to identify small faces?
Also, can you explain the multi-scale method and the merging with NMS? Do you mean to enlarge regions of interest, predict on them, and merge the results with those from the original-size image? That could help at inference time, but my problem is that the model is not learning small faces during training.
Any references/keywords to look for would be of great help
Thanks!
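The upscale-and-tile idea above can be sketched as follows. This is a dependency-free illustration, not the repo's data pipeline: the nearest-neighbour upscale stands in for a proper `cv2.resize`/PIL resize, and boxes are assumed to be `[x1, y1, x2, y2]` ground truth (landmarks would need the same shift-and-clip treatment):

```python
import numpy as np


def upscale_and_tile(image, boxes, scale=2, grid=(2, 2)):
    """Upscale an image and split it into a grid of tiles, remapping the
    ground-truth boxes [x1, y1, x2, y2] into each tile's coordinates.

    Boxes falling outside a tile are dropped; boxes straddling a tile
    edge are clipped. Returns a list of (tile, tile_boxes) pairs.
    """
    # Nearest-neighbour upscale via index repetition (keeps the sketch
    # dependency-free; use a real resize in practice).
    big = image.repeat(scale, axis=0).repeat(scale, axis=1)
    big_boxes = boxes.astype(float) * scale
    H, W = big.shape[:2]
    th, tw = H // grid[0], W // grid[1]
    tiles = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            y0, x0 = r * th, c * tw
            tile = big[y0:y0 + th, x0:x0 + tw]
            # Shift boxes into tile coordinates, then clip to the tile.
            b = big_boxes - np.array([x0, y0, x0, y0], dtype=float)
            b[:, 0::2] = b[:, 0::2].clip(0, tw)
            b[:, 1::2] = b[:, 1::2].clip(0, th)
            # Keep only boxes with positive area inside this tile.
            keep = (b[:, 2] > b[:, 0]) & (b[:, 3] > b[:, 1])
            tiles.append((tile, b[keep]))
    return tiles
```

Note the caveat baked into the clipping step: a face cut in half by a tile boundary becomes a clipped, partial box, so in practice people often use overlapping tiles or drop boxes whose visible area falls below some fraction of the original.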
@ashlinghosh Hello, can I ask how you used your own dataset to generate annotations in a WIDER FACE-like format? It is very urgent. It seems that no one maintains this code anymore?
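A minimal sketch of writing boxes in the classic WIDER FACE bbox ground-truth layout (image path, face count, then one `x y w h` line per face followed by six attribute flags, which are zeroed here as placeholders). Note this is the evaluation-style bbox format; check this repo's own `label.txt` before training, since its training labels also carry landmark coordinates:

```python
def widerface_gt_text(annotations):
    """Render annotations as WIDER FACE-style bbox ground-truth text.

    `annotations` maps an image path (relative to the image root) to a
    list of (x, y, w, h) boxes. The six trailing zeros stand in for the
    blur/expression/illumination/invalid/occlusion/pose attribute flags.
    """
    lines = []
    for path, boxes in annotations.items():
        lines.append(path)
        lines.append(str(len(boxes)))
        for (x, y, w, h) in boxes:
            lines.append(f"{x} {y} {w} {h} 0 0 0 0 0 0")
    return "\n".join(lines) + "\n"


# Usage: dump your own dataset's boxes to a ground-truth file.
# ann = {"0--Parade/0_Parade_marchingband_1_5.jpg": [(10, 20, 30, 40)]}
# open("wider_face_train_bbx_gt.txt", "w").write(widerface_gt_text(ann))
```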