Global-and-Local-Attention-Based-Free-Form-Image-Inpainting

Official implementation of "Global and local attention-based free-form image inpainting"


I am currently refactoring the code for the latest version of PyTorch. I will update the code and upload the pretrained models (e.g., Places and CelebA) soon. Apologies for the inconvenience.

Please check out the "sensor_version" branch for the Places2 weights, and let me know if you face any issues.

This is the official implementation of the paper "Global and Local Attention-Based Free-Form Image Inpainting," published in Sensors (paper). We are currently reformatting the code and will upload the pretrained models soon.

Prerequisites

  • Python3
  • PyTorch 1.0+ (the code works up to PyTorch 1.4; there appears to be an autograd problem with PyTorch 1.5. I will update the code for PyTorch 1.5 after finding the underlying issue.)
  • Torchvision 0.2+
  • PyYaml
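A minimal setup sketch for the prerequisites above. The exact pinned versions are an assumption on my part (torchvision 0.5.0 is the release paired with PyTorch 1.4.0, the latest version the README says is known to work):

```shell
# Install the dependencies listed above; versions are a suggestion,
# chosen to match the "works up to PyTorch 1.4" note.
pip install torch==1.4.0 torchvision==0.5.0 pyyaml
```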

Citation

If you find our paper and code beneficial for your work, please consider citing us!

@article{uddin2020global,
  title={Global and Local Attention-Based Free-Form Image Inpainting},
  author={Uddin, SM and Jung, Yong Ju},
  journal={Sensors},
  volume={20},
  number={11},
  pages={3204},
  year={2020},
  publisher={Multidisciplinary Digital Publishing Institute}
}

How to train

  • Set the directory path in "configs/config.yaml".
    -- Set the dataset name, if needed.
    -- If the dataset has subfolders, set "data_with_subfolder" to "True".
  • Run python train.py --config configs/config.yaml
  • To resume training, set "resume" to True in "configs/config.yaml". Currently this overwrites the previous checkpoints; the updated code will keep a list of checkpoints.
  • To view training, run
    tensorboard --logdir checkpoints/DATASET_NAME/hole_benchmark
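The steps above touch a few fields in "configs/config.yaml". A hypothetical sketch of the relevant entries is shown below; only "data_with_subfolder" and "resume" are named in this README, so the other key names and values are illustrative assumptions, not the repository's actual schema:

```yaml
# Illustrative config fragment -- key names other than
# data_with_subfolder and resume are assumptions.
dataset_name: DATASET_NAME        # used in the checkpoint/log path
train_data_path: /path/to/dataset # root directory of the training images
data_with_subfolder: True         # True if the dataset has subfolders
resume: False                     # set to True to resume from a checkpoint
```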

How to test

  • Modify "test_single.py" as needed and run it.
  • Bulk testing code will be uploaded soon.
  • Pretrained models will be uploaded soon.
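While the bulk-testing code is pending, the masking and compositing steps of single-image inpainting can be sketched in plain PyTorch. This is a hedged illustration, not the repository's actual `test_single.py`: the convention that `mask == 1` marks the hole, and the helper names `make_masked_input` / `composite`, are assumptions of mine.

```python
import torch


def make_masked_input(image: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Zero out the hole region so the network only sees known pixels.

    Assumes mask == 1 inside the hole (this convention is an assumption).
    """
    return image * (1.0 - mask)


def composite(output: torch.Tensor, image: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Paste the network prediction into the hole, keeping known pixels."""
    return image * (1.0 - mask) + output * mask


# Toy example: a 256x256 RGB image with a square free-form "hole".
image = torch.rand(1, 3, 256, 256)
mask = torch.zeros(1, 1, 256, 256)
mask[:, :, 64:192, 64:192] = 1.0

masked = make_masked_input(image, mask)        # network input
fake_output = torch.rand(1, 3, 256, 256)       # stand-in for the generator output
result = composite(fake_output, image, mask)   # final inpainted image
```

In practice `fake_output` would come from the pretrained generator; compositing ensures pixels outside the hole are copied from the original image unchanged.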

Some Results

  • Places dataset (see result figure in the repository)
  • ImageNet dataset (see result figure in the repository)
  • CelebA dataset (see result figure in the repository)
  • Ablation study of the modules (see result figure in the repository)

Acknowledgement

  • Code base: This code relies heavily on this repo. Kudos!