AnimateAnyone
Collaborators for an Implementation (Awaiting Code Release...)
The project is interesting from a security perspective, since it would allow dangerous scenes to be replicated synthetically instead of performed. Therefore, I am proceeding with an implementation based on the few details provided in the paper.
I have already set up a draft of the temporal part and am testing various training runs, adapting ControlLDM based on what I inferred from the figures in the paper. I am seeing the first results and most of the details are preserved, but the other network, the one that preserves the reference details (the ReferenceNet from the paper), is still missing.
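
To give an idea of what my draft of the temporal part looks like, below is a minimal sketch of the kind of layer I am experimenting with: self-attention over the frame axis, inserted after each spatial block and zero-initialized so it starts as an identity. The module name, dimensions, and placement are my own assumptions; the paper does not give this level of detail.

```python
# Hypothetical sketch of the temporal layer (names/dims are my guesses, not from the paper):
# self-attention over the frame axis, zero-initialized so the block is a no-op at the start.
import torch
import torch.nn as nn
from einops import rearrange


class TemporalAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj_out = nn.Linear(dim, dim)
        nn.init.zeros_(self.proj_out.weight)  # residual branch starts at zero
        nn.init.zeros_(self.proj_out.bias)

    def forward(self, x: torch.Tensor, num_frames: int) -> torch.Tensor:
        # x: (batch * frames, channels, height, width) feature map from a spatial block
        b_f, c, h, w = x.shape
        # attend across the frame axis independently for every spatial location
        x_t = rearrange(x, "(b f) c h w -> (b h w) f c", f=num_frames)
        h_t = self.norm(x_t)
        h_t, _ = self.attn(h_t, h_t, h_t)
        x_t = x_t + self.proj_out(h_t)  # residual, identity at initialization
        return rearrange(x_t, "(b h w) f c -> (b f) c h w",
                         b=b_f // num_frames, h=h, w=w)


if __name__ == "__main__":
    feats = torch.randn(2 * 16, 320, 32, 32)          # 2 clips of 16 frames
    out = TemporalAttention(dim=320)(feats, num_frames=16)
    print(out.shape)                                   # torch.Size([32, 320, 32, 32])
```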
It is also clear from the released videos that generation is done in steps, over groups of frames, because a jump is noticeable when looking at, for example, the background.
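
If that is the case, something like the following sliding-window scheme would reproduce it: generate overlapping groups of frames and blend the overlap to soften the jump. The group size, the overlap, and the `generate_group` placeholder are all guesses on my part, not anything stated in the paper.

```python
# Hypothetical sketch of processing a long pose sequence in overlapping groups of frames.
# `generate_group` stands in for the actual denoising call; group size and overlap are
# guesses, chosen only to illustrate blending away the visible jump between groups.
from typing import Callable, List

import numpy as np


def animate_in_groups(
    pose_frames: List[np.ndarray],
    generate_group: Callable[[List[np.ndarray]], List[np.ndarray]],
    group_size: int = 16,
    overlap: int = 4,
) -> List[np.ndarray]:
    stride = group_size - overlap
    out: List[np.ndarray] = []
    for start in range(0, len(pose_frames), stride):
        group = pose_frames[start:start + group_size]
        frames = generate_group(group)
        if not out:
            out.extend(frames)
        else:
            # Linearly blend the overlapping frames with the previous group to hide the seam.
            n = min(overlap, len(frames), len(out))
            for i in range(n):
                w = (i + 1) / (n + 1)
                out[-n + i] = (1.0 - w) * out[-n + i] + w * frames[i]
            out.extend(frames[n:])
        if start + group_size >= len(pose_frames):
            break
    return out
```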
Is anyone working on sketching out the detail preservation part?
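
For context, this is the kind of thing I have in mind for it, based only on my reading of the figures: the reference image is encoded by a parallel branch, and its spatial tokens are concatenated into the keys/values of the denoising UNet's self-attention. The class name, shapes, and injection point below are assumptions until the official code is released.

```python
# Hypothetical sketch of the detail-preservation idea as I read it from the figures:
# reference-image tokens are concatenated into the keys/values of the spatial
# self-attention of the denoising UNet. Names and shapes are assumptions.
import torch
import torch.nn as nn


class ReferenceConditionedSelfAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
        # x:   (batch, tokens, dim)      tokens of the denoising UNet feature map
        # ref: (batch, ref_tokens, dim)  tokens from the reference branch at the same resolution
        q = self.norm(x)
        kv = torch.cat([q, self.norm(ref)], dim=1)  # keys/values see both streams
        out, _ = self.attn(q, kv, kv)
        return x + out


if __name__ == "__main__":
    x = torch.randn(4, 32 * 32, 320)                        # denoising features for 4 frames
    ref = torch.randn(1, 32 * 32, 320).expand(4, -1, -1)    # same reference for every frame
    y = ReferenceConditionedSelfAttention(320)(x, ref)
    print(y.shape)                                          # torch.Size([4, 1024, 320])
```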
Hey there, bro! You can check out my simple implementation at this link: https://github.com/guoqincode/AnimateAnyone-unofficial. Feel free to take a look and see how it works!