
AutoToon Code

This directory contains the code for the AutoToon model, described in our paper AutoToon: Automatic Geometric Warping for Face Cartoon Generation, published at WACV'20. See our project page here. The code takes an input image (which need not be aligned and may contain multiple faces) and writes six output images produced by the AutoToon model, as described under Outputs below. Usage instructions and specs are detailed in test.py. Please cite our work as described below if you use it.

The components of the inference codebase are as follows:

  • test.py is the executable inference code. The usage and specs are detailed inside the main function. For example, python test.py --in_dir in/ --out_dir out/ --scale 1 runs inference with the pretrained weights on the images in ./in/, writes results to ./out/, and uses exaggeration scale 1 for the model output.
  • test_utils.py contains the helper functions for inference, primarily those used to generate and save the output images.
  • train.py is the executable training code. The usage is python train.py --root ./dataset --epochs 5000 --batch_size 16 --lrd 0.95 --save_dir ./checkpoints to train an AutoToon model. --root is the root directory containing the AutoToon dataset (with aligned photos, caricatures, and ground truth warps), --epochs is the number of epochs to train, --batch_size is the batch size to use, --lrd is the learning rate decay to use, and --save_dir is the directory in which to save model checkpoints as the model is trained. Other options, including continued training from a given checkpoint, are further detailed in the file.
  • dataset.py is the dataset definition file. AutoToonDataset is the dataset class to use (see train.py, and the loading sketch after this list). The train-validation split is defined in the function _make_train_test_split.
  • face_alignment.py and landmarks_detector.py are used for the facial alignment in pre-processing the input images. Sources are attributed in test.py.
  • models is the folder containing the model code and weights for the AutoToon model. AutoToon.py is the model definition and autotoon_model.pth contains the pretrained weights for the final model. vggface2_senet.py is the model definition for the SENet module used in AutoToon, and senet50_ft_weight.pkl contains the pretrained, finetuned weights for the SENet 50 on the VGGFace2 dataset, as provided by the authors.
  • in is an example folder in which to put the desired input images. All images in this directory will be processed serially.
  • out is an example folder to which the outputs of the model will be written. Both input and output directories are specified as arguments to the testing code in test.py.
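
For orientation, the sketch below shows how the dataset class might be wired into a PyTorch DataLoader (PyTorch is inferred from the .pth checkpoints). The AutoToonDataset name comes from dataset.py; its constructor arguments and the sample keys are assumptions for illustration, not the actual interface. See dataset.py and train.py for the real usage.

```python
# Minimal sketch of loading the AutoToon dataset with a PyTorch DataLoader.
# The constructor arguments and sample keys below are ASSUMED for
# illustration; see dataset.py for the actual interface.
from torch.utils.data import DataLoader
from dataset import AutoToonDataset  # dataset class named above

# ASSUMPTION: the dataset takes the dataset root and a train/val flag.
train_set = AutoToonDataset(root='./dataset', train=True)
loader = DataLoader(train_set, batch_size=16, shuffle=True, num_workers=4)

for batch in loader:
    # ASSUMPTION: each sample bundles an aligned photo, its caricature,
    # and the ground-truth warp field under these keys.
    photo, cari, warp = batch['photo'], batch['cari'], batch['warp']
    print(photo.shape, cari.shape, warp.shape)
    break
```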

Setup

The code was tested using the following dependencies in addition to a default Anaconda installation:

  • tensorflow 2.2.0
  • Keras 2.4.3
  • dlib 19.20.0
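
A quick way to check that your environment matches these tested versions:

```python
# Print installed versions to compare against the tested setup above.
import tensorflow as tf
import keras
import dlib

print('tensorflow:', tf.__version__)  # tested: 2.2.0
print('Keras:', keras.__version__)    # tested: 2.4.3
print('dlib:', dlib.__version__)      # tested: 19.20.0
```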

Usage

Inference: python test.py --in_dir in/ --out_dir out/ --scale 1 runs the pretrained model on the images in ./in/, writes results to ./out/, and uses exaggeration scale 1. Full usage and specs are documented in the main function of test.py. A minimal programmatic sketch follows.
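
If you would rather call the model from Python than through test.py, here is a sketch under stated assumptions: the .pth checkpoint suggests a PyTorch state_dict, the AutoToon class name is taken from the models/AutoToon.py file name, and the model is assumed to predict per-pixel warp offsets applied with grid_sample. The input path in/face.jpg is hypothetical, and the real pipeline in test.py also performs facial alignment first.

```python
# Hedged sketch of programmatic inference. ASSUMPTIONS: the checkpoint is a
# PyTorch state_dict, the model consumes a normalized RGB tensor, and it
# predicts per-pixel warp offsets; see test.py for the authoritative
# pipeline, which also aligns faces before inference.
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms
from models.AutoToon import AutoToon  # class name assumed from the file name

model = AutoToon()
model.load_state_dict(torch.load('models/autotoon_model.pth', map_location='cpu'))
model.eval()

img = transforms.ToTensor()(Image.open('in/face.jpg').convert('RGB')).unsqueeze(0)
n, c, h, w = img.shape

with torch.no_grad():
    offsets = model(img)  # ASSUMED shape: N x H x W x 2, in [-1, 1] grid units

# grid_sample expects absolute sampling coordinates in [-1, 1], so add the
# (optionally exaggerated) offsets to an identity grid.
identity = F.affine_grid(torch.eye(2, 3).unsqueeze(0), (n, c, h, w),
                         align_corners=True)
scale = 1.0  # exaggeration scale, analogous to --scale
warped = F.grid_sample(img, identity + scale * offsets, align_corners=True)
```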

Training: python train.py --root ./dataset --epochs 5000 --batch_size 16 --lrd 0.95 --save_dir ./checkpoints trains an AutoToon model. --root is the root directory of the AutoToon dataset (aligned photos, caricatures, and ground-truth warps), --epochs is the number of training epochs, --batch_size is the batch size, --lrd is the learning-rate decay, and --save_dir is where model checkpoints are saved during training. Other options, including resuming training from a checkpoint, are detailed in the file.
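
To show how these flags might map onto a training loop, here is a skeleton under the same assumptions as the sketches above. The optimizer, learning rate, and single loss term are illustrative stand-ins (train.py and the paper define the actual losses), and --lrd is read as a per-epoch multiplicative learning-rate decay.

```python
# Skeleton wiring the training flags above into a loop. Optimizer, learning
# rate, and the single L1 loss term are ASSUMPTIONS; see train.py for the
# actual training procedure and loss functions.
import os
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader
from dataset import AutoToonDataset
from models.AutoToon import AutoToon  # class name assumed from the file name

epochs, batch_size, lrd, save_dir = 5000, 16, 0.95, './checkpoints'
os.makedirs(save_dir, exist_ok=True)

model = AutoToon()
loader = DataLoader(AutoToonDataset(root='./dataset', train=True),
                    batch_size=batch_size, shuffle=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # lr ASSUMED
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=lrd)

for epoch in range(epochs):
    for batch in loader:
        optimizer.zero_grad()
        # ASSUMPTION: supervise predicted warps against ground-truth warps;
        # the paper also uses appearance losses between photo and caricature.
        loss = F.l1_loss(model(batch['photo']), batch['warp'])
        loss.backward()
        optimizer.step()
    scheduler.step()  # multiply the learning rate by lrd each epoch
    torch.save(model.state_dict(), os.path.join(save_dir, f'epoch_{epoch}.pth'))
```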

Outputs

Input images are passed through the AutoToon model, and the following files are written to the output directory (a visualization sketch follows the list):

  • IMG_NAME_orig.jpg, the original cropped and centered input image,
  • IMG_NAME_out.jpg, the output of the AutoToon model for the input image,
  • IMG_NAME_quiver.jpg, the warping field quiver plot generated by the model,
  • IMG_NAME_overlaid.jpg, the warping field quiver plot overlaid on the output image,
  • IMG_NAME_xflow.jpg, the visualized heatmap for the x-direction warping field with the overlaid quiver plot,
  • IMG_NAME_yflow.jpg, the visualized heatmap for the y-direction warping field with the overlaid quiver plot.
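
For a sense of how the quiver and flow-heatmap outputs above can be rendered, here is a matplotlib sketch. The flow array, its shape, and the subsampling step are illustrative placeholders, not the actual code in test_utils.py.

```python
# Sketch of the quiver and x/y flow-heatmap visualizations listed above.
# `flow` is a placeholder for the model's warping field (shape ASSUMED HxWx2).
import numpy as np
import matplotlib.pyplot as plt

h, w = 256, 256
flow = np.random.randn(h, w, 2)  # replace with the model's predicted field

step = 16  # subsample so the arrows stay readable
ys, xs = np.mgrid[0:h:step, 0:w:step]
u, v = flow[::step, ::step, 0], flow[::step, ::step, 1]

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
axes[0].quiver(xs, ys, u, -v)  # negate v: image y-axis points down
axes[0].set_title('warp field')
axes[1].imshow(flow[..., 0], cmap='jet')  # x-direction heatmap
axes[1].quiver(xs, ys, u, -v)
axes[1].set_title('x flow')
axes[2].imshow(flow[..., 1], cmap='jet')  # y-direction heatmap
axes[2].quiver(xs, ys, u, -v)
axes[2].set_title('y flow')
plt.tight_layout()
plt.show()
```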

Citation

Gong, Julia, Yannick Hold-Geoffroy, and Jingwan Lu. "AutoToon: Automatic Geometric Warping for Face Cartoon Generation." In The IEEE Winter Conference on Applications of Computer Vision, pp. 360-369. 2020.
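
For convenience, a BibTeX entry assembled from the reference above (the citation key is our own choice):

```bibtex
@inproceedings{gong2020autotoon,
  title     = {AutoToon: Automatic Geometric Warping for Face Cartoon Generation},
  author    = {Gong, Julia and Hold-Geoffroy, Yannick and Lu, Jingwan},
  booktitle = {The IEEE Winter Conference on Applications of Computer Vision},
  pages     = {360--369},
  year      = {2020}
}
```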