audio-driven-talking-face topic
wav2lip_288x288
SadTalker
[CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation
EmoGen
PyTorch Implementation for Paper "Emotionally Enhanced Talking Face Generation" (ICCVW'23 and ACM-MMW'23)
sd-wav2lip-uhq
Wav2Lip UHQ extension for Automatic1111
Awesome_Audio-driven_Talking-Face-Generation
A curated list of resources of audio-driven talking face generation
digitaltwin
Create a video of yourself appearing to speak any desired text, using only a single image and 10 seconds of sample audio.
SyncTalk
[CVPR 2024] This is the official source for our paper "SyncTalk: The Devil is in the Synchronization for Talking Head Synthesis"
IP_LAP
[CVPR 2023] Talking face implementation for "Identity-Preserving Talking Face Generation With Landmark and Appearance Priors"
SadTalker_ModelScope
Call the SadTalker API with a single line of code via ModelScope
NeRFFaceSpeech_Code
[CVPRW 2024] One-shot Audio-driven 3D Talking Head Synthesis via Generative Prior