# 3DFaceRecon
## Re-implementation Work of 3D Face Reconstruction

This repo is a re-implementation of the paper *Learning Detailed Face Reconstruction from a Single Image* (CVPR 2017).
## Requirements

- Python == 3.5.x (with opencv-python, scipy, etc.; Anaconda 3.4.x is recommended)
- TensorFlow == 1.2.0 (with CUDA 8.0 and cuDNN 5.1)
- gcc == 4.8.x (for compiling the rendering layer)
- Currently, the code has been tested only on Linux with the environment above.
## Data Preparation
- Download the image URL files from VGGFace, then extract them into the `./data` folder. To download all the images of VGGFace, you can use `download_vggface.py` in the `./data` folder:

  ```
  cd ./data
  python download_vggface.py ./vgg_face_dataset/files  # This process may take a long time.
  ```
- Compile the ZBuffer lib within the folder `./prepare_data/ZBuffer` by running the script `compile.m`.

  Note: you may need to set up the mex compiler and modify the OpenCV directories accordingly if errors occur.

  You can also run the MATLAB script `./test_zbuffer.m` to test the generated lib file, i.e., `ZBufferC.mexa64`.
- Prepare the 3DMM facial model. First, download the basic BFM model from 3DDFA; the following `.mat` files are needed in the `./3dmm` folder:

  ```
  01_MorphableModel.mat
  Model_Expression.mat
  model_info.mat
  vertex_code.mat
  ```

  You can also download them from Baidu Cloud or Google Drive.

  Then, run the MATLAB script `./prepare_data/script_ModelGenerate.m` to generate the `Model_Shape.mat` file in the `./3dmm` folder (the sketch after this list shows how these files can be used).
- Ground truth generation. The MATLAB script `./prepare_data/script_generate_dataset.m` is used for generating the training data. You should modify the input and output directories accordingly. As a result, the facial images and text labels will be placed in the folder `./data/vggface`. Only the cropped facial images and text labels are used in this project. The 235-dimensional parameter vectors are saved in the `labels` folder, laid out as follows (the sketch after this list shows how to split one):

  - dims 1-7: pose parameters (`[phi; gamma; theta; t3d_x; t3d_y; t3d_z; f]`)
  - dims 8-206: 199 shape parameters
  - dims 207-235: 29 expression parameters
- Prepare the training dataset. Just run the following Python scripts:

  ```
  cd ./prepare_data
  # Split the dataset into train, val, and test splits.
  python script_split_dataset.py
  # Compute the mean value for training.
  python script_compute_mean.py
  ```
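To make the label layout and the generated model files concrete, here is a minimal sketch that loads the 3DMM `.mat` files and splits one label vector. The field names `mu_shape`, `w`, and `w_exp`, as well as the example label path, are assumptions based on common 3DDFA/BFM conventions and are not confirmed by this repo; inspect the `.mat` contents before relying on them.

```python
import numpy as np
import scipy.io as sio

# Load the generated 3DMM files (field names are assumed, see note above).
shape_model = sio.loadmat('./3dmm/Model_Shape.mat')
exp_model = sio.loadmat('./3dmm/Model_Expression.mat')

# Split one 235-dimensional label vector into its three blocks.
params = np.loadtxt('./data/vggface/labels/example.txt')  # hypothetical label file
pose = params[0:7]       # [phi, gamma, theta, t3d_x, t3d_y, t3d_z, f]
alpha = params[7:206]    # 199 shape parameters
beta = params[206:235]   # 29 expression parameters

# Reconstruct the mesh: mean shape + shape basis * alpha + expression basis * beta.
vertices = (shape_model['mu_shape'].ravel()
            + shape_model['w'].dot(alpha)
            + exp_model['w_exp'].dot(beta)).reshape(-1, 3)
print(vertices.shape)  # (num_vertices, 3)
```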
## Installation
- Compile the custom op with TensorFlow:

  ```
  cd ./rendering_layer
  sh ./compile.sh
  cd ..
  ```

  Make sure no errors occur during compilation and that the file `render_depth.so` is generated under the folder `./rendering_layer/ops_src/`.
- Simple test. You can use the provided Python script `sample_test.py` to test `render_depth.so` (a minimal loading sketch also follows this list):

  ```
  cd ./rendering_layer
  python sample_test.py
  cd ..
  ```
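As a quick sanity check, a sketch of loading the compiled library with `tf.load_op_library`; the actual op name and signature are defined in `./rendering_layer/ops_src` and may differ from what this prints:

```python
import tensorflow as tf

# Load the compiled custom op library and list the Python wrappers it exposes.
render_module = tf.load_op_library('./rendering_layer/ops_src/render_depth.so')
print([name for name in dir(render_module) if not name.startswith('_')])
```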
## Train and Test
- Download the pretrained ImageNet model files, i.e., `resnet_v1_101.ckpt` and `vgg_16.ckpt` (a restore sketch is shown after this list):

  ```
  cd ./pretrained
  sh download_imagenet_models.sh
  ```
- For training, just run the shell script `./run_experiment.sh` directly, or modify several input args before running. To visualize the training process, you can use TensorBoard:

  ```
  cd ./output/tensorboard
  tensorboard --logdir=./ --port=6710
  ```
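For reference, a hedged sketch of how the downloaded ImageNet weights could be restored in TF 1.x; the variable scopes must match the repo's own network definition, and `resnet_v1_101` is only assumed here as the standard slim scope name for that checkpoint:

```python
import tensorflow as tf

# (Assumes the network graph has already been constructed in this process.)
# Restore only the backbone variables from the downloaded checkpoint.
backbone_vars = [v for v in tf.global_variables()
                 if v.name.startswith('resnet_v1_101')]
saver = tf.train.Saver(backbone_vars)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.restore(sess, './pretrained/resnet_v1_101.ckpt')
```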
## Disclaimer

- Currently, this repo has not been refined enough to reproduce exactly the results of the paper.
- Parts of the C++ code related to the zbuffer rendering are adapted from 3DDFA.
## Citation

If you find this implementation helpful to your research, please consider citing:

```
@inproceedings{Richardson_CVPR2017,
  author    = {E. Richardson and M. Sela and R. Or-El and R. Kimmel},
  title     = {Learning Detailed Face Reconstruction from a Single Image},
  booktitle = {2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2017}
}
```