FaceCycle
Learning Facial Representations from the Cycle-consistency of Face (ICCV 2021)
This repository contains the code for our ICCV 2021 paper by Jia-Ren Chang, Yong-Sheng Chen, and Wei-Chen Chiu.
Contents
- Introduction
- Results
- Usage
- Contacts
Introduction
In this work, we introduce cycle-consistency in facial characteristics as a free supervisory signal for learning facial representations from unlabeled facial images. The learning is realized by superimposing two constraints: facial motion cycle-consistency and identity cycle-consistency. The main idea of the facial motion cycle-consistency is that, given a face with an expression, we can perform de-expression to obtain a neutral face by removing the facial motion, and then perform re-expression to reconstruct the original face. The main idea of the identity cycle-consistency is to perform de-identity, depriving the given neutral face of its identity via feature re-normalization to obtain the mean face, and then re-identity, adding the personal attributes back to the mean face to recover the neutral face.
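To make the two cycles concrete, here is a minimal PyTorch-style sketch; every name in it (`encode_motion`, `remove_motion`, `add_motion`, `encode_identity`, `to_mean_face`, `from_mean_face`) and the L1 reconstruction loss are illustrative placeholders, not the actual modules of this repository:

```python
import torch.nn.functional as F

def motion_cycle_loss(face, encode_motion, remove_motion, add_motion):
    """Facial motion cycle: de-expression to a neutral face,
    then re-expression back to the original face."""
    motion = encode_motion(face)             # facial motion (expression) code
    neutral = remove_motion(face, motion)    # de-expression -> neutral face
    recon = add_motion(neutral, motion)      # re-expression -> original face
    return F.l1_loss(recon, face)            # cycle-consistency reconstruction

def identity_cycle_loss(neutral, encode_identity, to_mean_face, from_mean_face):
    """Identity cycle: de-identity into the mean face via feature
    re-normalization, then re-identity back to the neutral face."""
    identity = encode_identity(neutral)              # personal attributes
    mean_face = to_mean_face(neutral)                # de-identity -> mean face
    recon = from_mean_face(mean_face, identity)      # re-identity -> neutral face
    return F.l1_loss(recon, neutral)
```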

Results
More visualizations

Emotion recognition
We use the linear protocol to evaluate the learnt representations for emotion recognition and report accuracy (%) on two datasets; a minimal sketch of the protocol follows the table.
| Method | FER-2013 | RAF-DB |
|---|---|---|
| Ours | 48.76% | 71.01% |
| FAb-Net | 46.98% | 66.72% |
| TCAE | 45.05% | 65.32% |
| BMVC’20 | 47.61% | 58.86% |
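The linear protocol freezes the pretrained encoder and trains only a linear classifier on its features. A minimal sketch, assuming a generic PyTorch `encoder`, a `train_loader` yielding (image, label) batches, and illustrative hyperparameters:

```python
import torch
import torch.nn as nn

def linear_probe(encoder, train_loader, feat_dim, num_classes=7, epochs=10):
    """Train a linear classifier on top of frozen representations."""
    encoder.eval()                              # freeze the backbone
    for p in encoder.parameters():
        p.requires_grad = False
    clf = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in train_loader:
            with torch.no_grad():
                feats = encoder(images)         # fixed (B, feat_dim) features
            loss = criterion(clf(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return clf
```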
Head pose regression
We use linear regression to evaluate the learnt representations for head pose regression; a least-squares sketch follows the table.
| Method | Yaw | Pitch | Roll |
|---|---|---|---|
| Ours | 11.70 | 12.76 | 12.94 |
| FAb-Net | 13.92 | 13.25 | 14.51 |
| TCAE | 21.75 | 14.57 | 14.83 |
| BMVC’20 | 22.06 | 13.50 | 15.14 |
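Under the same frozen-feature setting, head pose regression reduces to fitting a linear map from representations to (yaw, pitch, roll). A sketch using NumPy least squares, assuming the features and pose labels were precomputed as arrays:

```python
import numpy as np

def fit_pose_regressor(feats, poses):
    """Least-squares linear map from (N, D) features to (N, 3) pose angles."""
    X = np.hstack([feats, np.ones((len(feats), 1))])  # append a bias column
    W, *_ = np.linalg.lstsq(X, poses, rcond=None)     # (D + 1, 3) weight matrix
    return W

def predict_pose(W, feats):
    """Predict (yaw, pitch, roll) for new features."""
    X = np.hstack([feats, np.ones((len(feats), 1))])
    return X @ W
```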
Person recognition
We directly adopt the learnt representations for person recognition; a cosine-similarity verification sketch follows the table.
| Method | LFW | CPLFW |
|---|---|---|
| Ours | 73.72% | 58.52% |
| VGG-like | 71.48% | - |
| LBP | 56.90% | 51.50% |
| HoG | 62.73% | 51.73% |
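Since no classifier is trained here, a face pair can be scored directly by the cosine similarity of its representations. A hedged sketch (the `encoder` name and the threshold are illustrative; in practice the threshold is tuned on a held-out split):

```python
import torch
import torch.nn.functional as F

def same_person(encoder, face_a, face_b, threshold=0.5):
    """Verify a face pair via cosine similarity of frozen representations.
    Assumes face_a/face_b are single-image batches of shape (1, C, H, W)."""
    with torch.no_grad():
        feat_a = encoder(face_a).flatten(1)   # (1, D) representation
        feat_b = encoder(face_b).flatten(1)
    sim = F.cosine_similarity(feat_a, feat_b).item()  # in [-1, 1]
    return sim >= threshold                   # illustrative decision threshold
```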
Frontalization
Frontalization results on the LFW dataset.

Image-to-image Translation
Image-to-image translation results.

Usage
From Others
Thanks to all the authors of these awesome repositories:
- SSIM
- Optical Flow Visualization
Download Pretrained Model
Test translation
```
python test_translation.py --loadmodel (pretrained model)
```
and you will get results like the ones shown below.

Replicate RAF-DB results
Download the pretrained model and the RAF-DB dataset, then run
```
python RAF_classify.py --loadmodel (pretrained model) \
                       --datapath (your RAF dataset path) \
                       --savemodel (your path for saving)
```
You should obtain 70~71% accuracy on basic emotion classification (7 categories) using the linear protocol.
Contacts
Any discussions or concerns are welcome!