GDN_Inpainting
Open-source code for the paper "Pixel-wise Dense Detector for Image Inpainting" (PG 2020).

Deep inpainting techniques fill the missing regions of corrupted images with semantically correct and visually plausible content. The results shown above were produced by our framework.
Prerequisites
- Ubuntu 16.04
- Python 3
- NVIDIA GPU with CUDA and cuDNN
- TensorFlow 1.12.0
Usage
Set up
- Clone this repo:
git clone https://github.com/Evergrow/GDN_Inpainting.git
cd GDN_Inpainting
- Install TensorFlow and dependencies (example commands follow this list)
- Download datasets: We use the Places2, CelebA-HQ, and Paris Street-View datasets. Other common inpainting datasets, such as CelebA and ImageNet, are also available.
- Collect masks: We provide a script that processes raw QD-IMD masks into training masks (a preprocessing sketch follows this list). Liu et al. provide 12k irregular masks for testing. Note that square masks are not a good choice for training our framework, while the test masks are freestyle.
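If you are setting up a fresh environment, the install step might look like the following; the exact dependency list is an assumption, since the repo's requirements file is not shown here:
pip install tensorflow-gpu==1.12.0
pip install numpy pillow pyyaml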
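The repo ships its own mask-processing script; as a rough sketch of what converting a raw QD-IMD stroke image into a binary training mask involves, something like the following could work (the paths, resolution, and 0/255 hole convention are all assumptions):
import glob
import os

import numpy as np
from PIL import Image

RAW_DIR = "qd_imd/train"   # hypothetical location of raw QD-IMD masks
OUT_DIR = "masks/train"    # hypothetical output directory for training masks
SIZE = 256                 # assumed training resolution

os.makedirs(OUT_DIR, exist_ok=True)
for path in glob.glob(os.path.join(RAW_DIR, "*.png")):
    # Load as grayscale and resize with nearest-neighbor to keep the mask binary.
    mask = Image.open(path).convert("L").resize((SIZE, SIZE), Image.NEAREST)
    # Threshold at mid-gray: stroke pixels become holes (255), the rest stays known (0).
    binary = (np.array(mask) > 127).astype(np.uint8) * 255
    Image.fromarray(binary).save(os.path.join(OUT_DIR, os.path.basename(path)))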
Training
- Modify the GPU id, dataset path, mask path, and checkpoint path in the config file (a hypothetical excerpt appears at the end of this section). Adjust other parameters if you like.
- Run
python train.py
and view training progress:
tensorboard --logdir [path to checkpoints]
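The exact key names in the config file are repo-specific; a hypothetical excerpt, just to show the fields that typically need editing, might look like:
GPU_ID: 0
DATASET_PATH: ./datasets/places2/train
MASK_PATH: ./masks/train
CHECKPOINT_DIR: ./checkpoints/places2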
Testing
Choose the input image, mask and model to test:
python test.py --image [input path] --mask [mask path] --output [output path] --checkpoint_dir [model path]
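For example, with hypothetical file locations:
python test.py --image examples/input.png --mask examples/mask.png --output examples/output.png --checkpoint_dir ./checkpoints/places2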
Pretrained models
CelebA-HQ and Places2 pretrained models are released for quick testing. Download the models via the Google Drive links and move them into your ./checkpoints directory.