
Awesome-Controllable-Video-Diffusion

Awesome | License: MIT

Awesome Controllable Video Generation with Diffusion Models.


Pose Control

UniAnimate-DiT: Human Image Animation with Large-Scale Video Diffusion Transformer

📄 Paper | 💻 Code

OmniHuman-1: Rethinking the Scaling-Up of One-Stage Conditioned Human Animation Models

📄 Paper | 🌐 Project Page

EchoMimicV2: Towards Striking, Simplified, and Semi-Body Human Animation

📄 Paper | 🌐 Project Page | 💻 Code

MikuDance: Animating Character Art with Mixed Motion Dynamics

📄 Paper | 🌐 Project Page | 💻 Code

Diffusion as Shader: 3D-aware Video Diffusion for Versatile Video Generation Control

📄 Paper | 🌐 Project Page | 💻 Code

TANGO: Co-Speech Gesture Video Reenactment with Hierarchical Audio-Motion Embedding and Diffusion Interpolation

📄 Paper | 🌐 Project Page | 💻 Code

DynamicPose: A Robust Image-to-Video Framework for Portrait Animation Driven by Pose Sequences

💻 Code

Alignment is All You Need: A Training-free Augmentation Strategy for Pose-guided Video Generation

📄 Paper

Follow Your Pose: Pose-Guided Text-to-Video Generation using Pose-Free Videos

📄 Paper | 🌐 Project Page | 💻 Code

Animate Anyone: Consistent and Controllable Image-to-Video Synthesis for Character Animation

📄 Paper | 🌐 Project Page

DreaMoving: A Human Video Generation Framework based on Diffusion Models

📄 Paper | 🌐 Project Page | 💻 Code

MagicPose: Realistic Human Poses and Facial Expressions Retargeting with Identity-aware Diffusion

📄 Paper | 🌐 Project Page | 💻 Code

MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model

📄 Paper | 🌐 Project Page | 💻 Code

Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance

📄 Paper | 🌐 Project Page | 💻 Code

Magic-Me: Identity-Specific Video Customized Diffusion

📄 Paper | 🌐 Project Page | 💻 Code

DisCo: Disentangled Control for Referring Human Dance Generation in Real World

📄 Paper | 🌐 Project Page | 💻 Code

Human4DiT: Free-view Human Video Generation with 4D Diffusion Transformer

📄 Paper | 🌐 Project Page

MimicMotion: High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance

📄 Paper | 🌐 Project Page | 💻 Code

Follow-Your-Pose v2: Multiple-Condition Guided Character Image Animation for Stable Pose Control

📄 Paper | 🌐 Project Page

HumanVid: Demystifying Training Data for Camera-controllable Human Image Animation

📄 Paper | 🌐 Project Page | 💻 Code

MusePose: A Pose-Driven Image-to-Video Framework for Virtual Human Generation

💻 Code

MDM: Human Motion Diffusion Model

📄 Paper | 🌐 Project Page | 💻 Code

Audio Control

FantasyTalking: Realistic Talking Portrait Generation via Coherent Motion Synthesis

📄 Paper | 🌐 Project Page | 💻 Code

Every Image Listens, Every Image Dances: Music-Driven Image Animation

📄 Paper

MEMO: Memory-Guided Diffusion for Expressive Talking Video Generation

📄 Paper | 🌐 Project Page | 💻 Code

Hallo2: Long-Duration and High-Resolution Audio-driven Portrait Image Animation

📄 Paper | 🌐 Project Page | 💻 Code

Co-Speech Gesture Video Generation via Motion-Decoupled Diffusion Model

📄 Paper | 🌐 Project Page | 💻 Code

Diverse and Aligned Audio-to-Video Generation via Text-to-Video Model Adaptation

📄 Paper | 🌐 Project Page | 💻 Code

MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation

📄 Paper | 💻 Code

Speech Driven Video Editing via an Audio-Conditioned Diffusion Model

📄 Paper | 🌐 Project Page | 💻 Code

Hallo: Hierarchical Audio-Driven Visual Synthesis for Portrait Image Animation

📄 Paper | 🌐 Project Page | 💻 Code

Listen, denoise, action! Audio-driven motion synthesis with diffusion models

📄 Paper | 🌐 Project Page | 💻 Code

CoDi: Any-to-Any Generation via Composable Diffusion

📄 Paper | 🌐 Project Page | 💻 Code

Generative Disco: Text-to-Video Generation for Music Visualization

📄 Paper

AADiff: Audio-Aligned Video Synthesis with Text-to-Image Diffusion

📄 Paper

EMO: Emote Portrait Alive - Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions

📄 Paper | 🌐 Project Page | 💻 Code

Context-aware Talking Face Video Generation

📄 Paper

Expression Control

FantasyPortrait: Enhancing Multi-Character Portrait Animation with Expression-Augmented Diffusion Transformers

📄 Paper | 🌐 Project Page | 💻 Code

X-Portrait: Expressive Portrait Animation with Hierarchical Motion Attention

📄 Paper | 🌐 Project Page | 💻 Code

HelloMeme: Integrating Spatial Knitting Attentions to Embed High-Level and Fidelity-Rich Conditions in Diffusion Models

📄 Paper | 🌐 Project Page | 💻 Code

SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers

📄 Paper | 🌐 Project Page | 💻 Code

DreamActor-M1: Holistic, Expressive and Robust Human Image Animation with Hybrid Guidance

📄 Paper | 🌐 Project Page

Follow-Your-Emoji: Fine-Controllable and Expressive Freestyle Portrait Animation

📄 Paper | 🌐 Project Page | 💻 Code

EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditions

📄 Paper | 🌐 Project Page | 💻 Code

Universal Control

VACE: All-in-One Video Creation and Editing

📄 Paper | 🌐 Project Page | 💻 Code

ControlNeXt: Powerful and Efficient Control for Image and Video Generation

📄 Paper | 🌐 Project Page | 💻 Code

Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models

📄 Paper | 🌐 Project Page | 💻 Code

ControlVideo: Training-free Controllable Text-to-Video Generation

📄 Paper | 💻 Code

TrackGo: A Flexible and Efficient Method for Controllable Video Generation

📄 Paper | 🌐 Project Page | 💻 Code

VideoComposer: Compositional Video Synthesis with Motion Controllability

📄 Paper | 🌐 Project Page | 💻 Code

Make-Your-Video: Customized Video Generation Using Textual and Structural Guidance

📄 Paper | 🌐 Project Page | 💻 Code

UniCtrl: Improving the Spatiotemporal Consistency of Text-to-Video Diffusion Models via Training-Free Unified Attention Control

📄 Paper | 🌐 Project Page | 💻 Code

SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models

📄 Paper | 🌐 Project Page | 💻 Code

VideoControlNet: A Motion-Guided Video-to-Video Translation Framework by Using Diffusion Model with ControlNet

📄 Paper | 🌐 Project Page | 💻 Code

Cinemo: Consistent and Controllable Image Animation with Motion Diffusion Models

📄 Paper | 🌐 Project Page | 💻 Code

Camera Control

MotionMaster: Training-free Camera Motion Transfer For Video Generation

📄 Paper | 🌐 Project Page | 💻 Code

CinePreGen: Camera Controllable Video Previsualization via Engine-powered Diffusion

📄 Paper

CamViG: Camera Aware Image-to-Video Generation with Multimodal Transformers

📄 Paper

Direct-a-Video: Customized Video Generation with User-Directed Camera Movement and Object Motion

📄 Paper | 🌐 Project Page | 💻 Code

MotionCtrl: A Unified and Flexible Motion Controller for Video Generation

📄 Paper | 🌐 Project Page | 💻 Code

CameraCtrl: Enabling Camera Control for Text-to-Video Generation

📄 Paper | 🌐 Project Page | 💻 Code

VD3D: Taming Large Video Diffusion Transformers for 3D Camera Control

📄 Paper | 🌐 Project Page

Controlling Space and Time with Diffusion Models

📄 Paper | 🌐 Project Page

CamCo: Camera-Controllable 3D-Consistent Image-to-Video Generation

📄 Paper | 🌐 Project Page

Collaborative Video Diffusion: Consistent Multi-video Generation with Camera Control

📄 Paper | 🌐 Project Page

HumanVid: Demystifying Training Data for Camera-controllable Human Image Animation

📄 Paper | 🌐 Project Page | 💻 Code

Training-free Camera Control for Video Generation

📄 Paper | 🌐 Project Page

Director3D: Real-world Camera Trajectory and 3D Scene Generation from Text

📄 Paper | 🌐 Project Page | 💻 Code

MotionBooth: Motion-Aware Customized Text-to-Video Generation

📄 Paper | 💻 Code

DiffDreamer: Towards Consistent Unsupervised Single-view Scene Extrapolation with Conditional Diffusion Models

📄 Paper | 🌐 Project Page

Trajectory Control

MotionCanvas: Cinematic Shot Design with Controllable Image-to-Video Generation

📄 Paper | 🌐 Project Page

FreeTraj: Tuning-Free Trajectory Control in Video Diffusion Models

📄 Paper | 🌐 Project Page | 💻 Code

TrailBlazer: Trajectory Control for Diffusion-Based Video Generation

📄 Paper | 🌐 Project Page | 💻 Code

DragNUWA: Fine-grained Control in Video Generation by Integrating Text, Image, and Trajectory

📄 Paper | 🌐 Project Page | 💻 Code

Tora: Trajectory-oriented Diffusion Transformer for Video Generation

📄 Paper | 🌐 Project Page

Controllable Longer Image Animation with Diffusion Models

📄 Paper | 🌐 Project Page

MotionCtrl: A Unified and Flexible Motion Controller for Video Generation

📄 Paper | 🌐 Project Page | 💻 Code

MotionBooth: Motion-Aware Customized Text-to-Video Generation

📄 Paper | 💻 Code

Puppet-Master: Scaling Interactive Video Generation as a Motion Prior for Part-Level Dynamics

📄 Paper | 🌐 Project Page | 💻 Code

Direct-a-Video: Customized Video Generation with User-Directed Camera Movement and Object Motion

📄 Paper | 🌐 Project Page | 💻 Code

Generative Image Dynamics

📄 Paper | 🌐 Project Page

Motion-Zero: Zero-Shot Moving Object Control Framework for Diffusion-Based Video Generation

📄 Paper

Video Diffusion Models are Training-free Motion Interpreter and Controller

📄 Paper | 🌐 Project Page

Subject Control

Phantom: Subject-consistent video generation via cross-modal alignment

📄 Paper | 🌐 Project Page

Tunnel Try-on: Excavating Spatial-temporal Tunnels for High-quality Virtual Try-on in Videos

📄 Paper

Direct-a-Video: Customized Video Generation with User-Directed Camera Movement and Object Motion

📄 Paper | 🌐 Project Page | 💻 Code

ActAnywhere: Subject-Aware Video Background Generation

📄 Paper | 🌐 Project Page

MotionBooth: Motion-Aware Customized Text-to-Video Generation

📄 Paper | 💻 Code

Animate-A-Story: Storytelling with Retrieval-Augmented Video Generation

📄 Paper | 💻 Code

One-Shot Learning Meets Depth Diffusion in Multi-Object Videos

📄 Paper

Area Control

Boximator: Generating Rich and Controllable Motions for Video Synthesis

📄 Paper | 🌐 Project Page

Follow-Your-Click: Open-domain Regional Image Animation via Short Prompts

📄 Paper | 🌐 Project Page | 💻 Code

AnimateAnything: Fine-Grained Open Domain Image Animation with Motion Guidance

📄 Paper | 🌐 Project Page | 💻 Code

Motion-I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling

📄 Paper | 🌐 Project Page

Streetscapes: Large-scale Consistent Street View Generation Using Autoregressive Video Diffusion

📄 Paper | 🌐 Project Page

Video Control

Customizing Motion in Text-to-Video Diffusion Models

📄 Paper | 🌐 Project Page

MotionClone: Training-Free Motion Cloning for Controllable Video Generation

📄 Paper | 🌐 Project Page | 💻 Code

VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models

📄 Paper | 🌐 Project Page | 💻 Code

Motion Inversion for Video Customization

📄 Paper | 🌐 Project Page | 💻 Code

Brain Control

NeuroCine: Decoding Vivid Video Sequences from Human Brain Activities

📄 Paper

ID Control

FantasyID: Face Knowledge Enhanced ID-Preserving Video Generation

📄 Paper | 🌐 Project Page | 💻 Code

Concat-ID: Towards Universal Identity-Preserving Video Synthesis

📄 Paper | 🌐 Project Page | 💻 Code

Ingredients: Blending Custom Photos with Video Diffusion Transformers

📄 Paper | 💻 Code

Identity-Preserving Text-to-Video Generation by Frequency Decomposition

📄 Paper | 🌐 Project Page | 💻 Code

VideoMaker: Zero-shot Customized Video Generation with the Inherent Force of Video Diffusion Models

📄 Paper | 🌐 Project Page | 💻 Code

Movie Gen: A Cast of Media Foundation Models

📄 Paper

CustomCrafter: Customized Video Generation with Preserving Motion and Concept Composition Abilities

📄 Paper | 🌐 Project Page | 💻 Code

ID-Animator: Zero-Shot Identity-Preserving Human Video Generation

📄 Paper | 🌐 Project Page | 💻 Code

VideoBooth: Diffusion-based Video Generation with Image Prompts

📄 Paper | 🌐 Project Page | 💻 Code

Magic-Me: Identity-Specific Video Customized Diffusion

📄 Paper | 🌐 Project Page | 💻 Code