SMILE
SMILE: Semantically-guided Multi-attribute Image and Layout Editing, ICCV Workshops 2021.
Official PyTorch Implementation
[Paper :newspaper:] [Video :video_camera:] [Poster :scroll:] [Slides :pushpin:]
:sparkles: Results
SMILE can manipulate a source image into an output image reflecting the attributes and style (e.g., eyeglasses, hat, hair, etc.) of a different person. More high-quality videos can be found in this link.
Check out the project page for additional visualizations.
Overview of the method
:wrench: Download Pretrained Weights
```bash
bash download_weights.sh
```
:zap: Demo
```bash
python main.py --GPU=NO_CUDA --FAN --EYEGLASSES --GENDER --EARRINGS --HAT --BANGS --HAIR --TRAIN_MASK --MOD --SPLIT_STYLE --mode=demo --ref_demo Figures/ffhq_teaser --rgb_demo Figures/teaser_input.png --pretrained_model models/pretrained_models/smileSEM
```
This command should reproduce the teaser figure. Explanation of arguments:

- `--FAN`: Remove all shortcuts in the upsampling residual blocks and add skip connections with the adaptive-wing-based heatmap.
- `--EYEGLASSES --GENDER --EARRINGS --HAT --BANGS --HAIR`: The selected attributes to manipulate.
- `--TRAIN_MASK`: Use semantic maps instead of RGB.
- `--MOD`: Use modulated convolutions.
- `--SPLIT_STYLE`: Give the Gender attribute more style dimensions than the others.
- `--ref_demo`: Folder with reference images. During the demo, an attribute classifier extracts every attribute from these references and imposes them on the `--rgb_demo` image. See the teaser figure.
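As a sketch of how the flags compose, the invocation below edits only hairstyle and eyeglasses from your own reference folder. The input and reference paths are placeholders you must replace; the architecture flags (`--FAN --TRAIN_MASK --MOD --SPLIT_STYLE`) are kept because they must match the configuration of the pretrained checkpoint.

```shell
# Hypothetical example: manipulate only HAIR and EYEGLASSES.
# Replace the reference folder and source image with your own files.
python main.py --GPU=NO_CUDA \
  --FAN --TRAIN_MASK --MOD --SPLIT_STYLE \
  --EYEGLASSES --HAIR \
  --mode=demo \
  --ref_demo path/to/your/reference_folder \
  --rgb_demo path/to/your/source_image.png \
  --pretrained_model models/pretrained_models/smileSEM
```

Attribute flags act as toggles, so omitting `--GENDER`, `--EARRINGS`, `--HAT`, and `--BANGS` leaves those attributes of the source image untouched.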
:earth_asia: Citation
If you find this work useful for your research, please cite our paper:
```bibtex
@InProceedings{Romero_2021_ICCV,
    author    = {Romero, Andres and Van Gool, Luc and Timofte, Radu},
    title     = {SMILE: Semantically-Guided Multi-Attribute Image and Layout Editing},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2021},
    pages     = {1924-1933}
}
```