facenet-pytorch
Problems using RetinaFace
Hello, in order to get fewer false-positive face detections I tried switching MTCNN to RetinaFace using this open-source implementation. After extracting all faces from a given image I pass them to FaceNet, but I think I need to post-process my face images. I aligned and resized them but am still getting strange results. Using a toy dataset (with 7 identities) and the UMAP algorithm I got the following:
[Plot: reduced embeddings using MTCNN]
[Plot: reduced embeddings using RetinaFace]
As you can see, in the first plot the embeddings of different identities are well separated into clusters, but when using RetinaFace this is not the case.
Did anyone run into this problem? Any ideas? I already looked through the code for post-processing steps (such as fixed_image_standardization), but nothing I tried worked.
Thanks in advance!
This might help.
import torch
import numpy as np
from PIL import Image

f = Image.open('input_face_160x160.jpg')
f = np.array(f)
# Scale uint8 pixel values from [0, 255] to roughly [-1, 1), as FaceNet expects
face_tensor = torch.tensor((f - 128.0) / 128.0, dtype=torch.float).permute(2, 0, 1).cuda()
# resnet is the FaceNet model (e.g. InceptionResnetV1); it expects a batch dimension
embedding = resnet(face_tensor.unsqueeze(0))
Normalizing the RetinaFace output to the (-1.0, 1.0) range helped me when I was facing a similar issue to the one you reported.
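As a quick sanity check (a minimal numpy sketch, not from the thread), the (f - 128.0) / 128.0 transform above does map uint8 pixel values into roughly the (-1.0, 1.0) range:

```python
import numpy as np

# Apply the same scaling as the snippet above to the extreme
# and middle uint8 pixel values.
pixels = np.array([0, 128, 255], dtype=np.float64)
scaled = (pixels - 128.0) / 128.0
# scaled == [-1.0, 0.0, 0.9921875]
```

So raw RetinaFace crops (still in [0, 255]) fed straight into FaceNet would be far outside the input distribution the model was trained on, which would explain the collapsed clusters.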
@ItamarSafriel Hi, did you manage to fix your issue? I have a similar one.
I thought MTCNN was SOTA in face detection? I would increase min_face_size and use higher thresholds to decrease false detections. Accepting only faces with high confidence would also help.
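A sketch of that suggestion (parameter values are illustrative, and the detector output here is simulated; as I recall, facenet-pytorch's MTCNN defaults are min_face_size=20 and thresholds=[0.6, 0.7, 0.7]):

```python
import numpy as np

# Stricter MTCNN settings might look like (illustrative values):
#   mtcnn = MTCNN(image_size=160, min_face_size=40,
#                 thresholds=[0.7, 0.8, 0.8])
# mtcnn.detect(img) returns bounding boxes and per-face confidences;
# here we simulate that output and keep only high-confidence faces.
boxes = np.array([[10, 10, 90, 90], [5, 5, 20, 20], [100, 40, 180, 130]])
probs = np.array([0.999, 0.62, 0.98])

keep = probs >= 0.95        # accept only faces with high confidence
boxes_kept = boxes[keep]    # drops the 0.62-confidence detection
```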