magic-animate
MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model
Thank you for your work. I would like to know when the training code will be released.
I have disabled xformers and set the torch device to mps. Inference starts, but then crashes when it tries to allocate 8 GB of memory...
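On Apple Silicon, two real PyTorch environment variables can make the MPS backend more forgiving when large allocations fail; a minimal sketch (the helper name `mps_low_memory_env` is hypothetical, and these values are a workaround to try, not the repo's defaults):

```python
import os

def mps_low_memory_env() -> dict:
    """Environment settings that relax PyTorch's MPS memory behavior."""
    return {
        # Fall back to CPU for ops the MPS backend does not implement.
        "PYTORCH_ENABLE_MPS_FALLBACK": "1",
        # 0.0 disables the high-watermark cap, so large allocations are not
        # rejected early (they may still exhaust physical memory).
        "PYTORCH_MPS_HIGH_WATERMARK_RATIO": "0.0",
    }

# Apply before importing torch / launching the pipeline.
os.environ.update(mps_low_memory_env())
```

Whether 8 GB actually fits then depends on the machine; halving precision or lowering resolution may still be necessary.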

When I run the scripts I get this funny error:

```python
/mnt/Samsung980Pro/WORK/AI/arch/magic-animate/magicanimate/pipelines/pipeline_animation.py:43: FutureWarning: Importing `DiffusionPipeline` or `ImagePipelineOutput` from diffusers.pipeline_utils is deprecated. Please import from diffusers.pipelines.pipeline_utils instead.
  from diffusers.pipeline_utils import DiffusionPipeline
...
```
As the title says, I use vid2densepose to generate DensePose maps of the driving videos, but the output stutters and has blurry flicker, while I found no such issues in...
Can you share how to obtain the LAION-Human dataset, which is used for the first stage of the paper?
"Spatial Transformer Networks" or "CBAM: Convolutional Block Attention Module"?
Hello, I have read this paper and think it is very well done. I have reviewed the issues about generating motion videos using DensePose and have also...
It is reported that real images generated by SDXL have higher resolution and fidelity, which can reduce artifacts and errors in the output and improve the similarity with the input...