[BUG]: Position of the eyes is given even if the eyes are not present in the image
Before You Report a Bug, Please Confirm You Have Done The Following...
- [x] I have updated to the latest version of the packages.
- [x] I have searched for both existing issues and closed issues and found none that matched my issue.
DeepFace's version
0.0.93
Python version
3.11
Operating System
WSL on Windows
Dependencies
absl-py==2.2.1 altair==5.5.0 annotated-types==0.7.0 anyio==4.9.0 astunparse==1.6.3 attrs==25.3.0 beautifulsoup4==4.13.3 blinker==1.9.0 cachetools==5.5.2 certifi==2025.1.31 charset-normalizer==3.4.1 click==8.1.8 contourpy==1.3.1 cycler==0.12.1 deepface==0.0.93 distro==1.9.0 exceptiongroup==1.2.2 faiss-gpu==1.7.2 filelock==3.18.0 fire==0.7.0 Flask==3.1.0 flask-cors==5.0.1 flatbuffers==25.2.10 fonttools==4.56.0 gast==0.6.0 gdown==5.2.0 gitdb==4.0.12 GitPython==3.1.44 google-auth==2.38.0 google-auth-oauthlib==1.2.1 google-pasta==0.2.0 gptree==0.1 gptree-cli==1.4.0 grpcio==1.71.0 gunicorn==23.0.0 h11==0.14.0 h5py==3.13.0 httpcore==1.0.7 httpx==0.28.1 idna==3.10 itsdangerous==2.2.0 Jinja2==3.1.6 jiter==0.9.0 joblib==1.4.2 jsonschema==4.23.0 jsonschema-specifications==2024.10.1 keras==2.15.0 kiwisolver==1.4.8 libclang==18.1.1 lz4==4.4.3 Markdown==3.7 markdown-it-py==3.0.0 MarkupSafe==3.0.2 matplotlib==3.10.1 mdurl==0.1.2 ml-dtypes==0.2.0 mtcnn==1.0.0 namex==0.0.8 narwhals==1.33.0 numpy==1.26.4 oauthlib==3.2.2 openai==1.70.0 opencv-python==4.11.0.86 opt_einsum==3.4.0 optree==0.14.1 packaging==24.2 pandas==2.2.3 pathspec==0.12.1 pillow==11.2.0 plotly==6.0.1 protobuf==4.25.6 pyarrow==19.0.1 pyasn1==0.6.1 pyasn1_modules==0.4.2 pydantic==2.11.1 pydantic_core==2.33.0 pydeck==0.9.1 Pygments==2.19.1 pyparsing==3.2.3 pyperclip==1.9.0 PySocks==1.7.1 python-dateutil==2.9.0.post0 python-dotenv==1.1.0 pytz==2025.2 referencing==0.36.2 requests==2.32.3 requests-oauthlib==2.0.0 retina-face==0.0.17 rich==14.0.0 rpds-py==0.24.0 rsa==4.9 scikit-learn==1.6.1 scipy==1.15.2 six==1.17.0 smmap==5.0.2 sniffio==1.3.1 soupsieve==2.6 streamlit==1.44.1 tenacity==9.0.0 tensorboard==2.15.2 tensorboard-data-server==0.7.2 tensorflow==2.15.0 tensorflow-estimator==2.15.0 tensorflow-io-gcs-filesystem==0.37.1 termcolor==3.0.0 threadpoolctl==3.6.0 toml==0.10.2 tornado==6.4.2 tqdm==4.67.1 typing-inspection==0.4.0 typing_extensions==4.13.0 tzdata==2025.2 urllib3==2.3.0 watchdog==6.0.0 Werkzeug==3.1.3 
wrapt==1.14.1
Reproducible example
def is_valid_face(face_dict, eye_alignment_threshold=200):
    """
    Basic validation using eye landmarks from DeepFace's output.
    Checks for presence and vertical alignment of eyes.
    """
    facial_area = face_dict.get("facial_area", {})
    left_eye_tuple = facial_area.get("left_eye", None)
    right_eye_tuple = facial_area.get("right_eye", None)

    # Check if both eye tuples exist and are sequences of at least length 2
    if not (isinstance(left_eye_tuple, (list, tuple)) and len(left_eye_tuple) >= 2 and
            isinstance(right_eye_tuple, (list, tuple)) and len(right_eye_tuple) >= 2):
        # print("DEBUG: is_valid_face failed: Missing or invalid eye tuples.")
        return False

    # Check vertical alignment
    try:
        y_diff = abs(float(left_eye_tuple[1]) - float(right_eye_tuple[1]))
        if y_diff >= eye_alignment_threshold:
            # print(f"DEBUG: is_valid_face failed: Eye alignment diff {y_diff:.1f} >= {eye_alignment_threshold}")
            return False
    except (ValueError, TypeError) as e:
        print(f"⚠️ Error calculating eye y_diff in is_valid_face: {e}. Assuming invalid.")
        return False  # Invalid if coordinates aren't numbers

    return True  # Passed both checks
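For reference, this check cannot catch the bug reported in this issue: with the payload DeepFace actually returned for a cropped face (eye y-coordinates of -81 and -79, both off-image), the presence and alignment checks still pass. A minimal self-contained demo, restating the two checks inline:

```python
# Demo: the eye-presence/alignment check passes even when the reported eye
# coordinates lie outside the image (negative y), because DeepFace still
# returns tuples for off-screen eyes.
def eyes_look_valid(facial_area, eye_alignment_threshold=200):
    left = facial_area.get("left_eye")
    right = facial_area.get("right_eye")
    if not (isinstance(left, (list, tuple)) and len(left) >= 2
            and isinstance(right, (list, tuple)) and len(right) >= 2):
        return False
    return abs(float(left[1]) - float(right[1])) < eye_alignment_threshold

# Payload reported in this issue for a face whose eyes are cut off at the top:
cropped_face = {'x': 1694, 'y': 0, 'w': 212, 'h': 258,
                'left_eye': (1839, -81), 'right_eye': (1743, -79)}
print(eyes_look_valid(cropped_face))  # True, despite the eyes being off-image
```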
Relevant Log Output
No response
Expected Result
To have None in left_eye / right_eye when the detected face is cut off with the eyes out of frame (when only the nose and mouth are visible).
What happened instead?
Even when there are no eyes in the detected region, I still get coordinate tuples for the eye keys in facial_area.
Additional Info
No response
I don't understand; could you please explain the bug more clearly?
Hi serengil,
Sorry for the confusion. Let me try to explain the issue more clearly.
The Problem:
The deepface library sometimes returns coordinate tuples for left_eye and right_eye within the facial_area dictionary, even when the detected face region in the image does not actually contain visible eyes.
Example Scenario:
Imagine an image where only the lower half of a person's face is visible (e.g., only the nose and mouth are in the frame, the eyes are cut off). DeepFace might correctly identify this lower portion as part of a face. However, the issue is that in the results dictionary for this detection, the facial_area still contains values like {'left_eye': (x1, y1), 'right_eye': (x2, y2)} instead of indicating that the eyes were not found in this specific detected region.
Expected Behavior:
When DeepFace detects a facial area but cannot find the eyes within that specific area (because they are obscured, cut off by the image boundary, or otherwise not detected), the left_eye and right_eye keys in the facial_area dictionary should ideally have a value of None (or some other clear indicator that the eyes were not located).
Actual Behavior:
Currently, even if the eyes are not visually present in the detected facial_area, DeepFace still provides coordinate tuples for left_eye and right_eye. These coordinates might be inaccurate or nonsensical in this context.
Why This is a Problem:
This makes it difficult to reliably determine if a detected face actually includes visible eyes. My is_valid_face function (provided in the original report) tries to use the presence and alignment of these eye coordinates for validation. However, this logic fails when coordinates are returned even for faces where no eyes are actually present in the detected zone, as it falsely assumes eyes were found.
In short: I expect facial_area['left_eye'] and facial_area['right_eye'] to be None if the eyes are not detected within the bounds of the facial_area, but instead, I am receiving coordinate tuples.
I hope this explanation is clearer. Thanks for looking into it!
can you share an example please?
For example, in this face extracted with RetinaFace from a video, I expect left_eye and right_eye to be None.
As well as in this image.
No, I mean what is the returning json payload for these, and what you want to see instead.
facial_area = {'x': 1694, 'y': 0, 'w': 212, 'h': 258, 'left_eye': (1839, -81), 'right_eye': (1743, -79)}
for the first image, and
facial_area = {'x': 546, 'y': 61, 'w': 273, 'h': 440, 'left_eye': (777, 226), 'right_eye': (757, 210)}
for the second one.
facial_area = {'x': 1694, 'y': 0, 'w': 212, 'h': 258, 'left_eye': (1839, -81), 'right_eye': (1743, -79)} -> here, agreed: eye coordinates cannot be negative, so we can overwrite them to None.
facial_area = {'x': 546, 'y': 61, 'w': 273, 'h': 440, 'left_eye': (777, 226), 'right_eye': (757, 210)} -> here the eye coordinates fall within the face's valid range (546 < 777 < 546 + 273), so we will not do anything for this case.
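The rule agreed above can be sketched as a small post-processing helper (a hypothetical function, not part of the DeepFace API): eye tuples with any negative component are overwritten to None, while in-range coordinates are left untouched.

```python
# Sketch of the agreed rule (hypothetical helper, not DeepFace itself):
# overwrite eye tuples containing a negative component with None.
def sanitize_eyes(facial_area):
    area = dict(facial_area)  # don't mutate the caller's dict
    for key in ("left_eye", "right_eye"):
        point = area.get(key)
        if point is not None and (point[0] < 0 or point[1] < 0):
            area[key] = None
    return area

sanitize_eyes({'x': 1694, 'y': 0, 'w': 212, 'h': 258,
               'left_eye': (1839, -81), 'right_eye': (1743, -79)})
# -> both left_eye and right_eye become None
```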
I’d like to clarify an issue I’m encountering related to non-frontal face images and the resulting embeddings.
In the second image, for instance, the left eye is not visible at all—it’s entirely obscured by the nose or the angle of the other eye, so why does the facial_area contain data for the eye that is not visible?
My main concern is with filtering out these non-frontal images because they negatively impact the embedding process. Specifically, when using the Facenet512 embedder, non-frontal faces tend to produce very similar embeddings, which causes the model to focus more on pose and alignment than on actual facial features.
As a result, I’ve observed problematic clustering behavior: for example, I have one cluster of a Black male and another of a White female (the one above in the photo). However, when their faces are captured at a similar non-frontal angle, their images are mistakenly grouped into a third, incorrect cluster. This seems to be due to the model interpreting pose similarity as identity similarity.
To avoid this, I’m looking for a way to automatically discard or filter out images where the face is not sufficiently frontal—particularly those where key features like both eyes are not visible. Any suggestions or best practices on how to approach this would be greatly appreciated.
For example this is the cluster, while I have the frontal faces of the people in the photos that are correctly grouped in other clusters
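As a workaround until the library changes, one possible heuristic filter is to reject detections whose eyes are missing or off-image, or whose horizontal inter-eye distance is small relative to the face width, which usually indicates a strongly non-frontal pose. This is only a sketch: the 0.25 ratio is an invented threshold to tune on your own data.

```python
# Heuristic frontality filter (a sketch, not a DeepFace feature): reject a
# detection when the eyes are missing/off-image, or when the horizontal
# inter-eye distance is small relative to the face width (a rough proxy
# for yaw). min_eye_ratio = 0.25 is an assumed threshold to tune.
def looks_frontal(facial_area, min_eye_ratio=0.25):
    left = facial_area.get("left_eye")
    right = facial_area.get("right_eye")
    if left is None or right is None:
        return False
    if min(left[0], left[1], right[0], right[1]) < 0:
        return False  # off-image landmark: treat the eye as not visible
    eye_dist = abs(left[0] - right[0])
    return eye_dist / float(facial_area["w"]) >= min_eye_ratio
```

With the payloads from this issue, both the cut-off face (negative eye y) and the strongly turned face (eyes only ~20 px apart in a 273 px wide box) are rejected.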
I understand your case, but deepface should support generalized use cases, not only yours.
It seems the detectors estimate eye positions in 3D, so it is normal to get nearly identical coordinates for the left and right eye. We will not take any action for this.
On the other hand, e.g. (1839, -81) is not a valid eye coordinate. We should overwrite this to None.
Hey Serengil, can I at least add a landmark validity check before returning eye coordinates? Or sanitize facial landmark coordinates (eyes, nose, mouth) before returning: if a coordinate is off-image (e.g. negative or beyond the image dimensions), set it to None, like you mentioned.
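The proposed sanitization could look roughly like this (a sketch only; the function name and the set of landmark keys are assumptions, not the actual DeepFace internals): any landmark falling outside the image bounds is replaced with None before the result is returned.

```python
# Sketch of the proposed sanitization step (hypothetical helper, not the
# actual DeepFace implementation): replace any landmark lying outside the
# image bounds with None. The landmark key names are assumptions.
def sanitize_landmarks(facial_area, img_width, img_height,
                       keys=("left_eye", "right_eye", "nose",
                             "mouth_left", "mouth_right")):
    area = dict(facial_area)
    for key in keys:
        point = area.get(key)
        if point is None:
            continue
        x, y = point
        if not (0 <= x < img_width and 0 <= y < img_height):
            area[key] = None  # off-image landmark: mark as not found
    return area
```

For the payload from this issue, sanitizing against a 1920x1080 frame would turn both eye tuples of the cut-off face into None while leaving in-bounds landmarks untouched.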
@RUTUPARNk feel free to do anything.