

DELTA: Learning Disentangled Avatars with Hybrid 3D Representations

[teaser figure]

This is the PyTorch implementation of DELTA. For more details, please check our project page.

DELTA learns a compositional avatar using explicit mesh and implicit NeRF representations.
DELTA allows us to synthesize novel views of the reconstructed avatar and to animate it with SMPL-X identity shape and pose control.
The disentanglement of the body and hair/clothing further enables us to transfer hairstyles and clothing between subjects for virtual try-on applications.
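
Conceptually, the body and face are rendered by the explicit mesh while hair and clothing are rendered by the NeRF, and the two are composited along each camera ray. The single-ray sketch below illustrates this blending idea; it is a simplified illustration under assumed tensor shapes, not the actual DELTA implementation:

import torch

def composite_ray(nerf_rgb, nerf_sigma, deltas, z_vals, mesh_rgb, mesh_depth):
    # nerf_rgb: (N, 3) colors and nerf_sigma: (N,) densities of ray samples,
    # deltas: (N,) inter-sample distances, z_vals: (N,) sample depths,
    # mesh_rgb: (3,) color and mesh_depth: scalar depth of the mesh surface.
    in_front = (z_vals < mesh_depth).float()   # samples behind the mesh are occluded
    alpha = (1.0 - torch.exp(-nerf_sigma * deltas)) * in_front
    ones = torch.ones(1, dtype=alpha.dtype)
    trans = torch.cumprod(torch.cat([ones, 1.0 - alpha]), dim=0)[:-1]
    weights = alpha * trans                    # standard volume-rendering weights
    rgb = (weights[:, None] * nerf_rgb).sum(dim=0)
    residual = trans[-1] * (1.0 - alpha[-1])   # light that reaches the opaque mesh
    return rgb + residual * mesh_rgb
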
The key features:

  1. animate the avatar by changing body poses (including hand articulation and facial expressions; see the SMPL-X sketch after this list),
  2. synthesize novel views of the avatar, and
  3. transfer hair/clothing between avatars for virtual try-on applications.
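
Animation is driven by SMPL-X parameters (identity shape, body pose, facial expression). The snippet below uses the separately distributed smplx package and model files to show the parameter blocks involved; it is illustrative only and not part of this repo:

import torch
import smplx  # https://github.com/vchoutas/smplx

# "models" is a placeholder path to your downloaded SMPL-X model files.
model = smplx.create("models", model_type="smplx", use_pca=False)
output = model(
    betas=torch.zeros(1, 10),          # identity shape coefficients
    body_pose=torch.zeros(1, 21 * 3),  # per-joint axis-angle body pose
    expression=torch.zeros(1, 10),     # facial expression coefficients
)
print(output.vertices.shape)  # torch.Size([1, 10475, 3])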

We provide examples for reconstructing avatars from face/upper-body videos and from full-body videos.
For full-body videos, please check SCARF.
For generating compositional avatars from text, please check TECA.

We also provide code for data processing here, which includes fitting the face, neck, and shoulders to a single image or a monocular video.

Getting Started

Clone the repo:

git clone https://github.com/yfeng95/DELTA
cd DELTA

Requirements

bash install_conda.sh

If you have problems installing pytorch3d, please follow their installation instructions.
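
After installation, a quick import check confirms the environment is usable (a minimal sanity-check sketch):

import torch
import pytorch3d

print("torch", torch.__version__, "CUDA available:", torch.cuda.is_available())
print("pytorch3d", pytorch3d.__version__)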

Download data

bash fetch_data.sh

Play with trained avatars

  • Check training frames:
python main_demo.py --expdir exps --exp_name person_0004 --visualize capture

[figure: training frames]
  • Synthesize novel views of a given frame:
python main_demo.py --expdir exps --exp_name person_0004 --visualize novel_view --frame_id 0

[figure: novel view synthesis]
  • Extract and visualize the mesh:
python main_demo.py --expdir exps --exp_name person_0004 --visualize extract_mesh --frame_id 0

This will also save mesh objects (body only, and body with hair) that you can open with MeshLab; green marks the extracted hair geometry.
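
To inspect the exported meshes programmatically rather than in MeshLab, something like the following works (file names are hypothetical; use the paths printed by the script):

import trimesh

body = trimesh.load("person_0004_body.obj")        # hypothetical output path
full = trimesh.load("person_0004_body_hair.obj")   # hypothetical output path
print(body.vertices.shape, body.faces.shape)
full.show()  # interactive viewer (requires pyglet)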

  • Animate with given animation sequences:
python main_demo.py --expdir exps --exp_name person_0004 --visualize animate
  • Change the body shape:
python main_demo.py --expdir exps --exp_name person_0004 --visualize change_shape

[figure: shape change]
  • Transfer a hairstyle between avatars and visualize the result:
python main_demo.py --expdir exps --exp_name person_2_train --body_model_path exps/released_version/person_0004/model.tar --visualize novel_view --max_yaw 20

[figure: hairstyle transfer]
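
To reproduce all of the above visualizations for one avatar in a single pass, a small driver can shell out to main_demo.py (a convenience sketch that only reuses the flags shown above):

import subprocess

for mode in ["capture", "novel_view", "extract_mesh", "animate", "change_shape"]:
    cmd = ["python", "main_demo.py", "--expdir", "exps",
           "--exp_name", "person_0004", "--visualize", mode]
    if mode in ("novel_view", "extract_mesh"):
        cmd += ["--frame_id", "0"]  # these modes operate on a specific frame
    subprocess.run(cmd, check=True)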

Training

  • Train DELTA:
python main_train.py --expdir exps --group training_hybrid \
     --exp_cfg configs/exp/face/hybrid_ngp.yml \
     --data_cfg exps/released_version/person_0004/data_config.yaml 
  • Train NeRF only:
python main_train.py --expdir exps --group training_nerf \
     --exp_cfg configs/exp/face/nerf_ngp.yml \
     --data_cfg exps/released_version/person_0004/data_config.yaml 
  • Train with your own data:

Check here to prepare data from your own videos; the processed data will look like this:

[figure: processed-data visualization]

To process full-body videos, please check here.

Then run:

python main_train.py --exp_cfg configs/exp/face/hybrid_ngp.yml \
     --data_cfg [data cfg file]
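
Before launching, it can help to verify that the data config parses (a minimal sketch; requires PyYAML, and the keys depend on how your data was processed):

import yaml

with open("exps/released_version/person_0004/data_config.yaml") as f:
    cfg = yaml.safe_load(f)
print(sorted(cfg))  # list the top-level keys of the data config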

Citation

@inproceedings{Feng2022scarf,
    author = {Feng, Yao and Yang, Jinlong and Pollefeys, Marc and Black, Michael J. and Bolkart, Timo},
    title = {Capturing and Animation of Body and Clothing from Monocular Video},
    year = {2022},
    booktitle = {SIGGRAPH Asia 2022 Conference Papers},
    articleno = {45},
    numpages = {9},
    location = {Daegu, Republic of Korea},
    series = {SA '22}
} 
@article{Feng2023DELTA,
    author = {Feng, Yao and Liu, Weiyang and Bolkart, Timo and Yang, Jinlong and Pollefeys, Marc and Black, Michael J.},
    title = {Learning Disentangled Avatars with Hybrid 3D Representations},
    journal = {arXiv},
    year = {2023}
} 

Acknowledgments

Our work benefits from many great open-source resources.

License

This code and model are available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using the code and model you agree to the terms in the LICENSE.