White-box-Cartoonization

Official TensorFlow implementation of the CVPR 2020 paper “Learning to Cartoonize Using White-box Cartoon Representations”




[CVPR 2020] Learning to Cartoonize Using White-box Cartoon Representations

project page | paper | twitter | zhihu | bilibili | facial model

  • TensorFlow implementation of the CVPR 2020 paper “Learning to Cartoonize Using White-box Cartoon Representations”.
  • An improved method for facial images is now available:
  • https://github.com/SystemErrorWang/FacialCartoonization

Use cases

Scenery

Food

Indoor Scenes

People

More Images Are Shown In The Supplementary Materials

Online demo

  • Some kind people have made an online demo for this project
  • Demo link: https://cartoonize-lkqov62dia-de.a.run.app/cartoonize
  • Code: https://github.com/experience-ml/cartoonize
  • Sample Demo: https://www.youtube.com/watch?v=GqduSLcmhto&feature=emb_title

Prerequisites

  • Training code: Linux or Windows
  • NVIDIA GPU + CUDA + cuDNN for performance
  • Inference code: Linux, Windows and macOS

How To Use

Installation

  • Assumes you already have an NVIDIA GPU with CUDA and cuDNN installed
  • Install tensorflow-gpu; we tested 1.12.0 and 1.13.0rc0
  • Install scikit-image==0.14.5; other versions may cause problems (see the example commands below)
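
A minimal install sketch, assuming a pip-based environment (a conda environment works equally well); the version pins follow the list above:

    # Illustrative commands only; any TF 1.x-compatible setup with these pins should work
    pip install tensorflow-gpu==1.12.0   # 1.13.0rc0 was also tested
    pip install scikit-image==0.14.5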

Inference with Pre-trained Model

  • Store test images in /test_code/test_images
  • Run /test_code/cartoonize.py
  • Results will be saved in /test_code/cartoonized_images (see the example below)
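
A minimal usage sketch from the repository root; the script and folder names come straight from the list above:

    # Place your images in test_code/test_images first
    cd test_code
    python cartoonize.py
    # Cartoonized results appear in test_code/cartoonized_images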

Train

  • Place your training data in corresponding folders in /dataset
  • Run pretrain.py; results will be saved in the /pretrain folder
  • Run train.py; results will be saved in the /train_cartoon folder (see the example commands after this list)
  • The code was cleaned up from a production environment and is untested
  • There may be minor problems, but they should be easy to resolve
  • The pretrained VGG_19 model can be found at the following URL: https://drive.google.com/file/d/1j0jDENjdwxCDb36meP6-u5xDBzmKBOjJ/view?usp=sharing
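
A minimal training sketch, assuming both scripts are run from the repository root with their default arguments (any command-line flags they expose are not covered here):

    # Step 1: initialization stage; intermediate results go to /pretrain
    python pretrain.py
    # Step 2: main training stage; results go to /train_cartoon
    python train.py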

Datasets

  • Due to copyright issues, we cannot provide the cartoon images used for training
  • However, these training datasets are easy to prepare
  • Scenery images are collected from Shinkai Makoto, Miyazaki Hayao and Hosoda Mamoru films
  • Films are clipped into frames, then randomly cropped and resized to 256x256 (a preprocessing sketch is shown after this list)
  • Portrait images are from Kyoto Animation and P.A. Works
  • We use this repo (https://github.com/nagadomi/lbpcascade_animeface) to detect facial areas
  • Manual data cleaning will greatly increase the quality of both datasets
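
A minimal preprocessing sketch, assuming OpenCV is used for frame extraction; the sampling interval, paths, and resize-then-crop logic here are illustrative choices, not the authors' original pipeline, which only specifies random crop and resize to 256x256:

    # Illustrative only: extract frames from a film and save random 256x256 crops.
    import os
    import random
    import cv2

    def extract_random_crops(video_path, out_dir, interval=30, crop_size=256):
        os.makedirs(out_dir, exist_ok=True)
        cap = cv2.VideoCapture(video_path)
        idx, saved = 0, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % interval == 0:  # keep one frame every `interval` frames (assumed rate)
                h, w = frame.shape[:2]
                # Resize the short side to crop_size so a 256x256 crop always fits
                scale = crop_size / float(min(h, w))
                frame = cv2.resize(frame, (max(crop_size, int(w * scale)),
                                           max(crop_size, int(h * scale))))
                h, w = frame.shape[:2]
                y = random.randint(0, h - crop_size)
                x = random.randint(0, w - crop_size)
                crop = frame[y:y + crop_size, x:x + crop_size]
                cv2.imwrite(os.path.join(out_dir, 'frame_%06d.jpg' % saved), crop)
                saved += 1
            idx += 1
        cap.release()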

Acknowledgement

We are grateful for the help from Lvmin Zhang and Style2Paints Research

License

  • Copyright (C) Xinrui Wang. All rights reserved.
  • Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).
  • Commercial use is prohibited; please retain this license if you clone this repo.

Citation

If you use this code for your research, please cite our paper:

@InProceedings{Wang_2020_CVPR,
  author    = {Wang, Xinrui and Yu, Jinze},
  title     = {Learning to Cartoonize Using White-Box Cartoon Representations},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2020}
}

Chinese Community

We have a "technical exchange" group, 纸片协会 (the Paper Association), a self-proclaimed top-tier anime-related tech chat group where everything except technology gets discussed. If you fail to join on the first try, just try again.

纸片协会 main QQ group: 599929904