OHTA: One-shot Hand Avatar via Data-driven Implicit Priors
PICO, ByteDance
*Equal contribution †Corresponding author
:star_struck: Accepted to CVPR 2024

OHTA is a novel approach for creating implicit animatable hand avatars from just a single image. It supports 1) text-to-avatar conversion, 2) hand texture and geometry editing, and 3) interpolation and sampling within the latent space.
:mega: Updates
[06/2024] :star_struck: Code released! Please refer to OHTA-code.
[02/2024] :partying_face: OHTA is accepted to CVPR 2024! Working on code release!
:love_you_gesture: Citation
If you find our work useful for your research, please consider citing the paper:
@inproceedings{zheng2024ohta,
  title={OHTA: One-shot Hand Avatar via Data-driven Implicit Priors},
  author={Zheng, Xiaozheng and Wen, Chao and Su, Zhuo and Xu, Zeran and Li, Zhaohu and Zhao, Yang and Xue, Zhou},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}