EXE-GAN
Facial image inpainting is the task of filling in visually realistic and semantically meaningful content for missing or masked pixels in a face image. This paper presents EXE-GAN, a novel exemplar-guided facial inpainting framework.
Do Inpainting Yourself: Generative Facial Inpainting Guided by Exemplars (EXE-GAN)
Official PyTorch implementation of EXE-GAN. [Homepage] [paper] [demo_youtube] [demo_bilibili]
Notice
Our paper was first released on Sun, 13 Feb 2022. We are grateful for the community's recognition of and attention to our project. We also recognize that several great papers have been published since ours, and we encourage you to check out their projects as well:
- Paint by Example, codes (released at Wed, 23 Nov 2022, CVPR 2023)
- Reference-Guided Face Inpainting, codes (released at Mon, 13 Mar 2023, TCSVT 2023)
- PATMAT, codes (released at Wed, 12 Apr 2023, ICCV 2023)
Requirements
cd EXE-GAN
pip install -r requirements.txt
- Note that other versions of PyTorch (e.g., newer than 1.7) also work well, but you must install the matching CUDA toolkit version.
What we have released
- [x] Training and testing codes
- [x] Pre-trained models
Training
- Prepare your dataset (download FFHQ and CelebA-HQ).
- The folder structure of the training and testing data is shown below:
root/
    test/
        xxx.png
        ...
        xxz.png
    train/
        xxx.png
        ...
        xxz.png
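If your images are in one flat folder, the train/test layout above can be prepared with a short script. This is a minimal sketch under assumed defaults (a 90/10 split, `.png` files); the function name and ratio are illustrative, not part of the repo:

```python
import os
import random
import shutil

def split_dataset(src_dir, root_dir, train_ratio=0.9, seed=0):
    """Copy images from src_dir into root_dir/train and root_dir/test."""
    images = sorted(f for f in os.listdir(src_dir) if f.endswith(".png"))
    random.Random(seed).shuffle(images)  # deterministic shuffle before splitting
    n_train = int(len(images) * train_ratio)
    for split, names in (("train", images[:n_train]), ("test", images[n_train:])):
        out = os.path.join(root_dir, split)
        os.makedirs(out, exist_ok=True)
        for name in names:
            shutil.copy(os.path.join(src_dir, name), os.path.join(out, name))
```
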
- Prepare pre-trained checkpoints: Arcface.pth and psp_ffhq_encode.pt (put the models in ./pre-train)
- Training:
python train.py --path /root/train --test_path /root/test --size 256 --embedding_weight 0.1 --id_loss_weight 0.1 --percept_loss_weight 0.5 --arcface_path ./pre-train/Arcface.pth --psp_checkpoint_path ./pre-train/psp_ffhq_encode.pt
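The `--embedding_weight`, `--id_loss_weight`, and `--percept_loss_weight` flags set the relative weights of the individual loss terms. A minimal sketch of how such a weighted objective is typically combined; the function name and the plain adversarial term are illustrative, not the repo's actual train.py code:

```python
def total_generator_loss(adv_loss, embedding_loss, id_loss, percept_loss,
                         embedding_weight=0.1, id_loss_weight=0.1,
                         percept_loss_weight=0.5):
    # Weighted sum mirroring the command-line flags above;
    # the exact combination in train.py may differ.
    return (adv_loss
            + embedding_weight * embedding_loss
            + id_loss_weight * id_loss
            + percept_loss_weight * percept_loss)
```

Raising a weight (e.g., `--id_loss_weight`) pushes training to preserve that property (here, facial identity) more strongly relative to the other terms.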
Testing
Notice
- For editing images from the web, photos should be aligned by face landmarks and cropped to 256x256 using align_face.
- Irregular masks (optional): if you would like to test on irregular masks, download the Testing Set masks.
- Use our FFHQ_60k pre-trained model EXE_GAN_model.pt, or a *.pt file trained by yourself.
python test.py --path /root/test --size 256 --psp_checkpoint_path ./pre-train/psp_ffhq_encode.pt --ckpt ./checkpoint/EXE_GAN_model.pt --mask_root ./dataset/mask/testing_mask_dataset --mask_file_root ./dataset/mask --mask_type test_6.txt
- mask_root: root folder of the irregular masks
- mask_file_root: folder containing the mask file-name list files
- mask_type: one of ["center", "test_2.txt", "test_3.txt", "test_4.txt", "test_5.txt", "test_6.txt", "all"]
- If you don't have irregular masks, using center masks is also fine.
python test.py --path /root/test --size 256 --psp_checkpoint_path ./pre-train/psp_ffhq_encode.pt --ckpt ./checkpoint/EXE_GAN_model.pt --mask_type center
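To make the two mask modes concrete, here is a sketch of what a "center" mask and a mask file-name list look like. Both helpers are illustrative (the mask size, hole ratio, and list format are assumptions, not the repo's exact code):

```python
import numpy as np

def center_mask(size=256, ratio=0.5):
    """Binary mask (1 = hole) covering a centered square of the image."""
    mask = np.zeros((size, size), dtype=np.float32)
    half = int(size * ratio / 2)
    c = size // 2
    mask[c - half:c + half, c - half:c + half] = 1.0
    return mask

def read_mask_list(mask_file_root, mask_type):
    """Return the mask file names listed in e.g. test_6.txt, one per line."""
    with open(f"{mask_file_root}/{mask_type}") as f:
        return [line.strip() for line in f if line.strip()]
```
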
Exemplar-guided facial image recovery
Notice
- For editing images from the web, photos should be aligned by face landmarks and cropped to 256x256 by align_face.
(Use our FFHQ_60k pre-trained model EXE_GAN_model.pt, or a *.pt file trained by yourself.)
python guided_recovery.py --psp_checkpoint_path ./pre-train/psp_ffhq_encode.pt --ckpt ./checkpoint/EXE_GAN_model.pt --masked_dir ./imgs/exe_guided_recovery/mask --gt_dir ./imgs/exe_guided_recovery/target --exemplar_dir ./imgs/exe_guided_recovery/exemplar --sample_times 10 --eval_dir ./recover_out
- masked_dir: folder of input masks
- gt_dir: folder of ground-truth images to be edited
- exemplar_dir: folder of exemplar images that guide the editing
- eval_dir: output folder
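In exemplar-guided recovery, only the masked region is synthesized; pixels outside the hole come from the ground-truth image. A minimal sketch of that composition step (the function name is illustrative, and the 1-means-hole convention is an assumption):

```python
import numpy as np

def compose_output(generated, ground_truth, mask):
    """Keep ground-truth pixels outside the hole; use the generator's
    prediction inside it (mask == 1 marks the hole)."""
    return mask * generated + (1.0 - mask) * ground_truth
```
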
(Image grid: Ground-truth | Mask | Exemplar | Inpainted)
- Inherent diversity: set --sample_times higher (e.g., 10) to get more diverse results.
(Image grid: diversity 1 | diversity 2 | diversity 3 | diversity 4)
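The diversity comes from resampling the random latent input on each draw while the masked image and exemplar stay fixed. A sketch of that loop, assuming a hypothetical generator callable and a 512-dimensional latent (both assumptions, not the repo's API):

```python
import numpy as np

def sample_diverse_results(generator, masked_image, exemplar, sample_times=10):
    """Run the generator several times with fresh noise; each draw yields
    a different plausible completion of the same masked input."""
    rng = np.random.default_rng()
    results = []
    for _ in range(sample_times):
        noise = rng.standard_normal(512).astype(np.float32)  # assumed latent size
        results.append(generator(masked_image, exemplar, noise))
    return results
```
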
Exemplar-guided style mixing
Notice
- For editing images from the web, photos should be aligned by face landmarks and cropped to 256x256 by align_face.
(Use our FFHQ_60k pre-trained model EXE_GAN_model.pt, or a *.pt file trained by yourself.)
python exemplar_style_mixing.py --psp_checkpoint_path ./pre-train/psp_ffhq_encode.pt --ckpt ./checkpoint/EXE_GAN_model.pt --masked_dir ./imgs/exe_guided_recovery/mask --gt_dir ./imgs/exe_guided_recovery/target --exemplar_dir ./imgs/exe_guided_recovery/exemplar --sample_times 2 --eval_dir mixing_out
- masked_dir: folder of input masks
- gt_dir: folder of ground-truth images to be edited
- exemplar_dir: folder of exemplar images that guide the editing
- eval_dir: output folder
- Inputs are shown below:
(Image grid: Ground-truth | Mask | Exemplar 1 | Exemplar 2)
- Style mixing results
(Image grid: style-mixing results)
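Style mixing in a StyleGAN-style W+ space typically takes the per-layer style codes of one exemplar for the coarse layers and the other exemplar's codes for the remaining layers. A sketch of that crossover under assumed dimensions (14 layers x 512 channels; the layout and function name are illustrative):

```python
import numpy as np

def mix_styles(latents_a, latents_b, crossover):
    """W+ style mixing: take the first `crossover` per-layer codes from
    exemplar A and the rest from exemplar B."""
    return np.concatenate([latents_a[:crossover], latents_b[crossover:]], axis=0)
```

Smaller crossover values hand more layers to exemplar B, shifting finer-grained attributes toward it.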
Editing masks by yourself
We have also uploaded a mask editing tool. You can use it to draw your own masks for editing.
python mask_gui.py
Bibtex
- If you find our code useful, please cite our paper:

@misc{lu2022inpainting,
  title={Do Inpainting Yourself: Generative Facial Inpainting Guided by Exemplars},
  author={Wanglong Lu and Hanli Zhao and Xianta Jiang and Xiaogang Jin and Yongliang Yang and Min Wang and Jiankai Lyu and Kaijie Shi},
  year={2022},
  eprint={2202.06358},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
Acknowledgements
Model details and custom CUDA kernel code are from the official StyleGAN2 repository: https://github.com/NVlabs/stylegan2
Code for Learned Perceptual Image Patch Similarity (LPIPS) came from https://github.com/richzhang/PerceptualSimilarity
To match FID scores more closely to the official TensorFlow implementation, we used the FID Inception V3 implementation from https://github.com/mseitzer/pytorch-fid