
Bounding box is inaccurate

Open darshg321 opened this issue 1 year ago • 2 comments

I have this code:

```python
boxes, probs = mtcnn.detect(rgb_frame)

if boxes is not None:
    for box, prob in zip(boxes, probs):
        if prob < min_probability:
            continue
        x, y, w, h = box.astype(int)

        face = rgb_frame[y:y+h, x:x+w]

        tensor_image = mtcnn(face)

        face_embedding = facenet_model(tensor_image.unsqueeze(0)).detach().numpy()[0]

        match = face_matching(face_embedding, embeddings, 0.4)
        if match:
            print('Match found')
            cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
        else:
            cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 0, 255), 2)
```

When I draw a rectangle around each face using the bounding box coordinates, the top-left corner is accurate, but the width and height of the boxes are much larger than the face, and both sometimes come out far longer than they should. Why is this happening? I am not passing any parameters to the MTCNN object.

darshg321 avatar Apr 19 '23 02:04 darshg321

The issue you're facing, where the width and height of the boxes are larger than the actual face and sometimes vary randomly, might be due to a difference in the input image sizes expected by MTCNN and by the face recognition model.

MTCNN expects the input image size to be relatively small (typically a few hundred pixels), while the face recognition model may require a larger input image size (e.g., 160x160 pixels) for accurate feature extraction. When you extract the face region using the bounding box coordinates from MTCNN, it may result in a face image that doesn't match the size expectation of the face recognition model.

To resolve this issue, you need to ensure that the face region extracted with the MTCNN bounding box is resized to match the input size required by the face recognition model. You can use OpenCV's resize function (or another image resizing method) to do this before feeding the crop into the recognition model, as in the sketch below.
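Here is a minimal sketch of that step, reusing the `face` and `facenet_model` names from the snippet above; the 160x160 size and the `(x - 127.5) / 128` standardization are assumptions based on how facenet_pytorch typically preprocesses crops, so please verify them against the docs:

```python
import cv2
import torch

def prepare_face(face_crop, size=160):
    """Resize an RGB uint8 face crop (H x W x 3) to the recognition model's
    expected input size and convert it to a normalized CHW float tensor."""
    resized = cv2.resize(face_crop, (size, size), interpolation=cv2.INTER_AREA)
    tensor = torch.from_numpy(resized).permute(2, 0, 1).float()
    # Assumed fixed standardization, matching facenet_pytorch's default preprocessing.
    return (tensor - 127.5) / 128.0

# Usage in place of `tensor_image = mtcnn(face)`:
# tensor_image = prepare_face(face)
# face_embedding = facenet_model(tensor_image.unsqueeze(0)).detach().numpy()[0]
```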

I hope this solves the issue. If you need any more help, feel free to reply.

Devang-C avatar Jun 22 '23 11:06 Devang-C

I am facing the same issue. I am detecting faces and saving the cropped face images to my local drive to further train a facial recognition model, but the bounding box is only accurate at the top-left corner; the other corners are expanded. Below is my code:

```python
from facenet_pytorch import MTCNN
import cv2
import os

mtcnn = MTCNN(keep_all=True, device='cuda:0')
dir_path = 'assets/raw/ammar'
save_path = 'assets/face_raw/ammar'

files = os.listdir(dir_path)

for file in files:
    img = cv2.imread(dir_path + "/" + file, cv2.COLOR_BGR2RGB)
    boxes, _ = mtcnn.detect(img)

    x = int(boxes[0][0])
    y = int(boxes[0][1])
    w = int(boxes[0][2])
    h = int(boxes[0][3])

    crop_img = img[y:y+h, x:x+w]

    cv2.imwrite(save_path + "/" + file, crop_img)
```
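For reference, MTCNN can also crop and save the detected faces itself via the `save_path` argument of its forward call; the sketch below is only my reading of that API and may need checking against the facenet_pytorch docs:

```python
from facenet_pytorch import MTCNN
from PIL import Image
import os

mtcnn = MTCNN(keep_all=True, device='cuda:0')

dir_path = 'assets/raw/ammar'
save_path = 'assets/face_raw/ammar'

for file in os.listdir(dir_path):
    # PIL loads images as RGB, which is what MTCNN expects.
    img = Image.open(os.path.join(dir_path, file)).convert('RGB')
    # The forward call detects and crops, and (with save_path) writes the crop(s) to disk.
    mtcnn(img, save_path=os.path.join(save_path, file))
```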

I'm stuck here and cannot find a solution.

ammar3010 avatar Sep 20 '23 13:09 ammar3010