Awesome-Controllable-Video-Diffusion
Awesome Controllable Video Generation with Diffusion Models
Table of Contents
- Pose Control
- Audio Control
- Expression Control
- Universal Control
- Camera Control
- Trajectory Control
- Subject Control
- Area Control
- Video Control
- Brain Control
- ID Control
Pose Control
UniAnimate-DiT: Human Image Animation with Large-Scale Video Diffusion Transformer
OmniHuman-1: Rethinking the Scaling-Up of One-Stage Conditioned Human Animation Models
EchoMimicV2: Towards Striking, Simplified, and Semi-Body Human Animation
📄 Paper | 🌐 Project Page | 💻 Code
MikuDance: Animating Character Art with Mixed Motion Dynamics
📄 Paper | 🌐 Project Page | 💻 Code
Diffusion as Shader: 3D-aware Video Diffusion for Versatile Video Generation Control
📄 Paper | 🌐 Project Page | 💻 Code
TANGO: Co-Speech Gesture Video Reenactment with Hierarchical Audio-Motion Embedding and Diffusion Interpolation
📄 Paper | 🌐 Project Page | 💻 Code
DynamicPose: A robust image-to-video framework for portrait animation driven by pose sequences
Alignment is All You Need: A Training-free Augmentation Strategy for Pose-guided Video Generation
Follow Your Pose: Pose-Guided Text-to-Video Generation using Pose-Free Videos
📄 Paper | 🌐 Project Page | 💻 Code
Animate Anyone: Consistent and Controllable Image-to-Video Synthesis for Character Animation
DreaMoving: A Human Video Generation Framework based on Diffusion Models
📄 Paper | 🌐 Project Page | 💻 Code
MagicPose: Realistic Human Poses and Facial Expressions Retargeting with Identity-aware Diffusion
📄 Paper | 🌐 Project Page | 💻 Code
MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model
📄 Paper | 🌐 Project Page | 💻 Code
Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance
📄 Paper | 🌐 Project Page | 💻 Code
Magic-Me: Identity-Specific Video Customized Diffusion
📄 Paper | 🌐 Project Page | 💻 Code
DisCo: Disentangled Control for Referring Human Dance Generation in Real World
📄 Paper | 🌐 Project Page | 💻 Code
Human4DiT: Free-view Human Video Generation with 4D Diffusion Transformer
MimicMotion: High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance
📄 Paper | 🌐 Project Page | 💻 Code
Follow-Your-Pose v2: Multiple-Condition Guided Character Image Animation for Stable Pose Control
HumanVid: Demystifying Training Data for Camera-controllable Human Image Animation
📄 Paper | 🌐 Project Page | 💻 Code
MusePose: A Pose-Driven Image-to-Video Framework for Virtual Human Generation
MDM: Human Motion Diffusion Model
📄 Paper | 🌐 Project Page | 💻 Code
Audio Control
FantasyTalking: Realistic Talking Portrait Generation via Coherent Motion Synthesis
📄 Paper | 🌐 Project Page | 💻 Code
Every Image Listens, Every Image Dances: Music-Driven Image Animation
MEMO: Memory-Guided Diffusion for Expressive Talking Video Generation
📄 Paper | 🌐 Project Page | 💻 Code
Hallo2: Long-Duration and High-Resolution Audio-driven Portrait Image Animation
📄 Paper | 🌐 Project Page | 💻 Code
Co-Speech Gesture Video Generation via Motion-Decoupled Diffusion Model
📄 Paper | 🌐 Project Page | 💻 Code
Diverse and Aligned Audio-to-Video Generation via Text-to-Video Model Adaptation
📄 Paper | 🌐 Project Page | 💻 Code
MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation
Speech Driven Video Editing via an Audio-Conditioned Diffusion Model
📄 Paper | 🌐 Project Page | 💻 Code
Hallo: Hierarchical Audio-Driven Visual Synthesis for Portrait Image Animation
📄 Paper | 🌐 Project Page | 💻 Code
Listen, denoise, action! Audio-driven motion synthesis with diffusion models
📄 Paper | 🌐 Project Page | 💻 Code
CoDi: Any-to-Any Generation via Composable Diffusion
📄 Paper | 🌐 Project Page | 💻 Code
Generative Disco: Text-to-Video Generation for Music Visualization
AADiff: Audio-Aligned Video Synthesis with Text-to-Image Diffusion
EMO: Emote Portrait Alive - Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions
📄 Paper | 🌐 Project Page | 💻 Code
Context-aware Talking Face Video Generation
Expression Control
FantasyPortrait: Enhancing Multi-Character Portrait Animation with Expression-Augmented Diffusion Transformers
📄 Paper | 🌐 Project Page | 💻 Code
X-Portrait: Expressive Portrait Animation with Hierarchical Motion Attention
📄 Paper | 🌐 Project Page | 💻 Code
HelloMeme: Integrating Spatial Knitting Attentions to Embed High-Level and Fidelity-Rich Conditions in Diffusion Models
📄 Paper | 🌐 Project Page | 💻 Code
SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers
📄 Paper | 🌐 Project Page | 💻 Code
DreamActor-M1: Holistic, Expressive and Robust Human Image Animation with Hybrid Guidance
Follow-Your-Emoji: Fine-Controllable and Expressive Freestyle Portrait Animation
📄 Paper | 🌐 Project Page | 💻 Code
EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditions
📄 Paper | 🌐 Project Page | 💻 Code
Universal Control
VACE: All-in-One Video Creation and Editing
📄 Paper | 🌐 Project Page | 💻 Code
ControlNeXt: Powerful and Efficient Control for Image and Video Generation
📄 Paper | 🌐 Project Page | 💻 Code
Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models
📄 Paper | 🌐 Project Page | 💻 Code
ControlVideo: Training-free Controllable Text-to-Video Generation
TrackGo: A Flexible and Efficient Method for Controllable Video Generation
📄 Paper | 🌐 Project Page | 💻 Code
VideoComposer: Compositional Video Synthesis with Motion Controllability
📄 Paper | 🌐 Project Page | 💻 Code
Make-Your-Video: Customized Video Generation Using Textual and Structural Guidance
📄 Paper | 🌐 Project Page | 💻 Code
UniCtrl: Improving the Spatiotemporal Consistency of Text-to-Video Diffusion Models via Training-Free Unified Attention Control
📄 Paper | 🌐 Project Page | 💻 Code
SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models
📄 Paper | 🌐 Project Page | 💻 Code
VideoControlNet: A Motion-Guided Video-to-Video Translation Framework by Using Diffusion Model with ControlNet
📄 Paper | 🌐 Project Page | 💻 Code
Cinemo: Consistent and Controllable Image Animation with Motion Diffusion Models
📄 Paper | 🌐 Project Page | 💻 Code
Camera Control
MotionMaster: Training-free Camera Motion Transfer For Video Generation
📄 Paper | 🌐 Project Page | 💻 Code
CinePreGen: Camera Controllable Video Previsualization via Engine-powered Diffusion
CamViG: Camera Aware Image-to-Video Generation with Multimodal Transformers
Direct-a-Video: Customized Video Generation with User-Directed Camera Movement and Object Motion
📄 Paper | 🌐 Project Page | 💻 Code
MotionCtrl: A Unified and Flexible Motion Controller for Video Generation
📄 Paper | 🌐 Project Page | 💻 Code
CameraCtrl: Enabling Camera Control for Text-to-Video Generation
📄 Paper | 🌐 Project Page | 💻 Code
VD3D: Taming Large Video Diffusion Transformers for 3D Camera Control
Controlling Space and Time with Diffusion Models
CamCo: Camera-Controllable 3D-Consistent Image-to-Video Generation
Collaborative Video Diffusion: Consistent Multi-video Generation with Camera Control
HumanVid: Demystifying Training Data for Camera-controllable Human Image Animation
📄 Paper | 🌐 Project Page | 💻 Code
Training-free Camera Control for Video Generation
Director3D: Real-world Camera Trajectory and 3D Scene Generation from Text
📄 Paper | 🌐 Project Page | 💻 Code
MotionBooth: Motion-Aware Customized Text-to-Video Generation
DiffDreamer: Towards Consistent Unsupervised Single-view Scene Extrapolation with Conditional Diffusion Models
Trajectory Control
MotionCanvas: Cinematic Shot Design with Controllable Image-to-Video Generation
FreeTraj: Tuning-Free Trajectory Control in Video Diffusion Models
📄 Paper | 🌐 Project Page | 💻 Code
TrailBlazer: Trajectory Control for Diffusion-Based Video Generation
📄 Paper | 🌐 Project Page | 💻 Code
DragNUWA: Fine-grained Control in Video Generation by Integrating Text, Image, and Trajectory
📄 Paper | 🌐 Project Page | 💻 Code
Tora: Trajectory-oriented Diffusion Transformer for Video Generation
Controllable Longer Image Animation with Diffusion Models
MotionCtrl: A Unified and Flexible Motion Controller for Video Generation
📄 Paper | 🌐 Project Page | 💻 Code
MotionBooth: Motion-Aware Customized Text-to-Video Generation
Puppet-Master: Scaling Interactive Video Generation as a Motion Prior for Part-Level Dynamics
📄 Paper | 🌐 Project Page | 💻 Code
Direct-a-Video: Customized Video Generation with User-Directed Camera Movement and Object Motion
📄 Paper | 🌐 Project Page | 💻 Code
Generative Image Dynamics
Motion-Zero: Zero-Shot Moving Object Control Framework for Diffusion-Based Video Generation
Video Diffusion Models are Training-free Motion Interpreter and Controller
Subject Control
Phantom: Subject-consistent video generation via cross-modal alignment
Tunnel Try-on: Excavating Spatial-temporal Tunnels for High-quality Virtual Try-on in Videos
Direct-a-Video: Customized Video Generation with User-Directed Camera Movement and Object Motion
📄 Paper | 🌐 Project Page | 💻 Code
ActAnywhere: Subject-Aware Video Background Generation
MotionBooth: Motion-Aware Customized Text-to-Video Generation
Animate-A-Story: Storytelling with Retrieval-Augmented Video Generation
One-Shot Learning Meets Depth Diffusion in Multi-Object Videos
Area Control
Boximator: Generating Rich and Controllable Motions for Video Synthesis
Follow-Your-Click: Open-domain Regional Image Animation via Short Prompts
📄 Paper | 🌐 Project Page | 💻 Code
AnimateAnything: Fine-Grained Open Domain Image Animation with Motion Guidance
📄 Paper | 🌐 Project Page | 💻 Code
Motion-I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling
Streetscapes: Large-scale Consistent Street View Generation Using Autoregressive Video Diffusion
Video Control
Customizing Motion in Text-to-Video Diffusion Models
MotionClone: Training-Free Motion Cloning for Controllable Video Generation
📄 Paper | 🌐 Project Page | 💻 Code
VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models
📄 Paper | 🌐 Project Page | 💻 Code
Motion Inversion for Video Customization
📄 Paper | 🌐 Project Page | 💻 Code
Brain Control
NeuroCine: Decoding Vivid Video Sequences from Human Brain Activities
ID Control
FantasyID: Face Knowledge Enhanced ID-Preserving Video Generation
📄 Paper | 🌐 Project Page | 💻 Code
Concat-ID: Towards Universal Identity-Preserving Video Synthesis
📄 Paper | 🌐 Project Page | 💻 Code
Ingredients: Blending Custom Photos with Video Diffusion Transformers
Identity-Preserving Text-to-Video Generation by Frequency Decomposition
📄 Paper | 🌐 Project Page | 💻 Code
VideoMaker: Zero-shot Customized Video Generation with the Inherent Force of Video Diffusion Models
📄 Paper | 🌐 Project Page | 💻 Code
Movie Gen: A Cast of Media Foundation Models
CustomCrafter: Customized Video Generation with Preserving Motion and Concept Composition Abilities
📄 Paper | 🌐 Project Page | 💻 Code
ID-Animator: Zero-Shot Identity-Preserving Human Video Generation
📄 Paper | 🌐 Project Page | 💻 Code
VideoBooth: Diffusion-based Video Generation with Image Prompts
📄 Paper | 🌐 Project Page | 💻 Code
Magic-Me: Identity-Specific Video Customized Diffusion