
A model to detect facial expressions

Project Overview

This project aims to develop a model that detects facial expressions of human faces in any custom image or video. The final application will detect the faces in a given image or video and label the facial expression of each detected face.
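The detect-then-classify flow described above can be sketched as a small pipeline. This is a minimal illustration, not the project's actual implementation: `detect_faces` and `classify` are hypothetical callables standing in for whatever face detector and expression classifier the project ends up using.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Bounding box as (x, y, width, height), a common convention for face detectors.
Box = Tuple[int, int, int, int]


@dataclass
class FacePrediction:
    box: Box
    expression: str


def annotate_image(
    image: object,
    detect_faces: Callable[[object], List[Box]],
    classify: Callable[[object, Box], str],
) -> List[FacePrediction]:
    """Detect every face in the image, then classify the expression of each.

    Both callables are placeholders: plug in any detector (e.g. a cascade or
    CNN-based one) and any expression classifier with matching signatures.
    """
    return [FacePrediction(box, classify(image, box)) for box in detect_faces(image)]
```

With stub callables, `annotate_image` returns one `FacePrediction` per detected face, pairing each bounding box with its predicted label.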

About the Dataset

The model is trained on the AffectNet dataset, which contains about 420k images manually annotated for the presence of seven discrete facial expressions (categorical model) and for the intensity of valence and arousal (dimensional model). AffectNet provides:

  • Images of the faces
  • Location of the faces in the images
  • Location of 68 facial landmarks
  • Eleven emotion and non-emotion category labels
  • Valence and arousal values of the facial expressions in the continuous domain
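One annotated AffectNet sample could be modeled with a record like the following. This is an illustrative sketch only: the field names are hypothetical and do not reflect the actual column names in the AffectNet annotation files.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class AffectNetSample:
    """One annotated face, mirroring the annotation types listed above."""
    image_path: str                        # path to the face image
    face_box: Tuple[int, int, int, int]    # face location: (x, y, width, height)
    landmarks: List[Tuple[float, float]]   # 68 facial landmark (x, y) points
    expression: int                        # categorical label index (0-10)
    valence: float                         # continuous valence value
    arousal: float                         # continuous arousal value
```

Grouping the per-face annotations in one structure like this keeps the categorical label and the dimensional (valence/arousal) values together for each training example.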

Emotion Categories: Eleven annotated emotions are provided for images and indexed as follows: 0: Neutral, 1: Happiness, 2: Sadness, 3: Surprise, 4: Fear, 5: Disgust, 6: Anger, 7: Contempt, 8: None, 9: Uncertain, 10: No-Face
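The index-to-label mapping above translates directly into a lookup table, which is handy when decoding model outputs back to human-readable names:

```python
# Category labels in index order, exactly as listed in the AffectNet documentation.
AFFECTNET_LABELS = [
    "Neutral",    # 0
    "Happiness",  # 1
    "Sadness",    # 2
    "Surprise",   # 3
    "Fear",       # 4
    "Disgust",    # 5
    "Anger",      # 6
    "Contempt",   # 7
    "None",       # 8
    "Uncertain",  # 9
    "No-Face",    # 10
]


def label_for(index: int) -> str:
    """Map a predicted class index (0-10) to its AffectNet label."""
    return AFFECTNET_LABELS[index]
```

Note that indices 8-10 are non-emotion categories, so a deployed classifier may choose to filter them out rather than display them.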

Detailed information regarding the dataset can be found on the AffectNet Official Website.