AutoToon Code
This directory contains the code for the AutoToon model, described in our paper AutoToon: Automatic Geometric Warping for Face Cartoon Generation, published at WACV '20. See our project page here. The code takes an input image (which need not be aligned and may contain multiple faces) and outputs six images representing the results of the AutoToon model; these are described under Outputs below. Usage instructions and specs are detailed in `test.py`. Please cite our work as described below if you use it.
The components of the codebase are as follows:
- `test.py` is the executable inference code; its usage is described under Usage below.
- `test_utils.py` contains the helper functions and code for inference, primarily those used for generating and saving the images.
- `train.py` is the executable training code; its usage is described under Usage below.
- `dataset.py` is the dataset definition file. `AutoToonDataset` is the dataset class to be used (see `train.py`). The dataset train-validation split can be found in the function `_make_train_test_split`.
- `face_alignment.py` and `landmarks_detector.py` are used for the facial alignment in pre-processing the input images (see the sketch after this list). Sources are attributed in `test.py`.
- `models/` is the folder containing the model code and weights for the AutoToon model. `AutoToon.py` is the model definition, and `autotoon_model.pth` contains the pretrained weights for the final model. `vggface2_senet.py` is the model definition for the SENet module used in AutoToon, and `senet50_ft_weight.pkl` contains the pretrained, fine-tuned weights for SENet-50 on the VGGFace2 dataset, as provided by the authors.
- `in/` is an example folder in which to put the desired input images. All images in this directory will be processed serially.
- `out/` is an example folder to which the outputs of the model will be written. Both input and output directories are specified as arguments to `test.py`.
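For illustration, the sketch below shows the kind of dlib-based landmark detection that `landmarks_detector.py` performs during pre-processing. This is a hedged example, not the repository's confirmed code path: the 68-point shape-predictor file and the image path are assumptions.

```python
# Hypothetical sketch of dlib-based landmark detection, similar in spirit
# to what landmarks_detector.py does in pre-processing. The shape-predictor
# file below is a standard dlib model and an assumption here, not
# necessarily the one this repository uses.
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = dlib.load_rgb_image("in/example.jpg")  # hypothetical input image
for face in detector(img, 1):  # upsample once so smaller faces are found
    landmarks = predictor(img, face)
    points = [(p.x, p.y) for p in landmarks.parts()]
    print(f"found face with {len(points)} landmarks")
```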
Setup
The code was tested using the following dependencies in addition to a default Anaconda installation:
- tensorflow 2.2.0
- Keras 2.4.3
- dlib 19.20.0
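As a quick sanity check, a snippet like the following (an illustrative suggestion, not part of the repository) prints the installed versions for comparison against those tested:

```python
# Quick environment check: compare installed versions with the tested ones.
import tensorflow
import keras
import dlib

print("tensorflow:", tensorflow.__version__)  # tested with 2.2.0
print("keras:", keras.__version__)            # tested with 2.4.3
print("dlib:", dlib.__version__)              # tested with 19.20.0
```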
Usage
`test.py` is the executable inference code; the usage and specs are detailed inside its main function. For convenience, an example invocation is `python test.py --in_dir in/ --out_dir out/ --scale 1`, which runs inference with the pretrained weights on the images in `./in/`, writes the results to `./out/`, and uses exaggeration scale `1` for the model output.
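Because the exaggeration scale is just a command-line flag, inference is easy to script. The sketch below is a hypothetical wrapper that reruns `test.py` at several scales; it uses only the documented flags, and the assumption that `--scale` accepts values other than `1` is ours, not the repository's.

```python
# Hypothetical wrapper around test.py: run inference at several
# exaggeration scales. Only the documented flags (--in_dir, --out_dir,
# --scale) are used; scales other than 1 are an assumption.
import subprocess
from pathlib import Path

for scale in (1, 2, 3):
    out_dir = Path(f"out_scale_{scale}")
    out_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["python", "test.py",
         "--in_dir", "in/",
         "--out_dir", f"{out_dir}/",
         "--scale", str(scale)],
        check=True,
    )
```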
`train.py` is the executable training code. Example usage: `python train.py --root ./dataset --epochs 5000 --batch_size 16 --lrd 0.95 --save_dir ./checkpoints`, where:
- `--root` is the root directory containing the AutoToon dataset (with aligned photos, caricatures, and ground-truth warps),
- `--epochs` is the number of epochs to train,
- `--batch_size` is the batch size to use,
- `--lrd` is the learning rate decay to use, and
- `--save_dir` is the directory in which to save model checkpoints as the model is trained.

Other options, including continued training from a given checkpoint, are further detailed in the file.
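For reference, the sketch below shows how `AutoToonDataset` might be wrapped in a data loader with the batch size from the example command, assuming the model is PyTorch-based (as the `.pth` weights suggest). The constructor arguments shown are assumptions; the actual signature lives in `dataset.py`.

```python
# Hypothetical use of AutoToonDataset from dataset.py. The constructor
# arguments (root, train) are assumptions, not the confirmed API; see
# dataset.py and train.py for the actual signature and the
# train-validation split (_make_train_test_split).
from torch.utils.data import DataLoader
from dataset import AutoToonDataset

train_set = AutoToonDataset(root="./dataset", train=True)  # assumed signature
loader = DataLoader(train_set, batch_size=16, shuffle=True)

for batch in loader:
    ...  # each batch pairs aligned photos with caricatures / ground-truth warps
```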
Outputs
Images are passed through the AutoToon model, and the following outputs are produced and written to the output directory:
- `IMG_NAME_orig.jpg`: the original cropped and centered input image
- `IMG_NAME_out.jpg`: the output of the AutoToon model for the input image
- `IMG_NAME_quiver.jpg`: the warping-field quiver plot generated by the model
- `IMG_NAME_overlaid.jpg`: the warping-field quiver plot overlaid on the output image
- `IMG_NAME_xflow.jpg`: the visualized heatmap of the x-direction warping field with the quiver plot overlaid
- `IMG_NAME_yflow.jpg`: the visualized heatmap of the y-direction warping field with the quiver plot overlaid
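Since the six files share a common prefix, downstream scripts can group them by input image. The snippet below is an illustrative helper (not part of the repository) that collects the outputs in `out/` keyed by input name:

```python
# Illustrative helper (not part of the repository): group the six output
# files that test.py writes per input image, keyed by the input's name.
from collections import defaultdict
from pathlib import Path

SUFFIXES = ("orig", "out", "quiver", "overlaid", "xflow", "yflow")

grouped = defaultdict(dict)
for path in Path("out/").glob("*.jpg"):
    for suffix in SUFFIXES:
        if path.stem.endswith("_" + suffix):
            img_name = path.stem[: -(len(suffix) + 1)]
            grouped[img_name][suffix] = path
            break
```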
Citation
Gong, Julia, Yannick Hold-Geoffroy, and Jingwan Lu. "AutoToon: Automatic Geometric Warping for Face Cartoon Generation." In The IEEE Winter Conference on Applications of Computer Vision, pp. 360-369. 2020.