magic-animate
MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model
continous -> continuous
Hi, thanks for the amazing work! Just gonna open a thread to share our video outputs here for everyone to show off: https://file.io/DlfGouv1V5Ts
How did the demo vids achieve facial movements when DensePose does not contain facial information?
In [this demo](https://github.com/magic-research/magic-animate/blob/main/assets/teaser/t3.gif), we can see the girl moving her mouth, "lip syncing". However, the DensePose input does not contain any facial information (it's just blobs), and the initial image only...
```sh
44%|████████████████████████████████████ | 11/25 [02:48
> allocated memory try setting max_split_size_mb to avoid fragmentation.
See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
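If anyone else hits this CUDA out-of-memory error mid-inference, a minimal thing to try is the allocator setting the message points at. This is just a sketch of that suggestion: the 512 MB value is illustrative, and the inference command is whatever you normally run.

```bash
# Ask PyTorch's CUDA caching allocator to split large blocks, which can
# reduce fragmentation-related OOMs. 512 is an example value; tune per GPU.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512

# Then re-run your usual MagicAnimate inference command in the same shell.
```

If that isn't enough, reducing the video resolution or the number of frames processed at once usually helps more than allocator tuning.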
The yaml install didn't work for me, so I tried via the requirements.txt. I had to install cuDNN and made some progress on the errors I received, but then ran...
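For context, the two setup paths I'm comparing look roughly like this. The file names are assumed to match what the repo ships (an environment yaml and a requirements.txt); adjust if yours differ.

```bash
# Path 1: conda environment from the provided yaml
conda env create -f environment.yaml
# activate it under whatever name the yaml defines
conda activate <env-name-from-yaml>

# Path 2: plain virtualenv + pip, which is what I fell back to
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```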
I would like to know more training details.
I developed this script to simplify the setup of the Python virtual environment and the model downloads. If this PR is accepted, the user just needs to run: ```bash wget -q...
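To give a rough idea of the steps the script automates (this is a hypothetical sketch, not the actual PR contents; the model repo id is from memory, so verify it against the README):

```bash
# Create a virtual environment and install dependencies.
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# Fetch the pretrained weights into a local folder (repo id assumed; check
# the official README for the exact checkpoints and expected layout).
git lfs install
mkdir -p pretrained_models
git clone https://huggingface.co/zcxu-eric/MagicAnimate pretrained_models/MagicAnimate
```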
Hi. Thanks for the great work. What was the reasoning behind the choice of DensePose? I see that the segmentation masks are not always accurate, so it leads to some...
I'm sure many people want to generate only the final video instead of the current output, which stitches together the source image, the DensePose video, and the final animation....
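Until there is an option for that, one workaround is to crop the stitched output after the fact. This assumes the output is a horizontal strip of three equally sized panels with the result on the right; adjust the crop if the layout differs.

```bash
# Keep only the right-hand third (the generated animation) of the stitched video.
# crop=w:h:x:y -> width iw/3, full height, starting two thirds of the way across.
ffmpeg -i stitched_output.mp4 -filter:v "crop=iw/3:ih:2*iw/3:0" -c:a copy result_only.mp4
```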