Open-AnimateAnyone
I got decent results, but some of the samples were in the wrong color
Hello, @guoqincode!
The samples in the test set worked fine, but with pure black clothes I often got black-and-white results. Have you ever run into similar problems?
Here is the reference image; could you run your ckpt to see whether the color is normal?
Video 1 looks pretty good 👍! How many steps did you train stage 1 for? I trained 13000 steps but still can't control the pose :(
Hello @luyvlei, the results are indeed pretty decent. Could you please share what steps I need to follow to get such results? Thanks so much in advance.
You can retrain the last layer of AnimateDiff from zero initialization.
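For reference, here is a minimal sketch of what that zero initialization can look like for an AnimateDiff-style motion module; the `proj_out` naming follows the convention mentioned below, and the exact module layout of a given checkpoint may differ.

```python
import torch.nn as nn

def zero_init_motion_proj_out(motion_module: nn.Module) -> None:
    """Zero the final output projection of every temporal transformer block,
    so the motion branch starts as a no-op residual and its last layer is
    retrained from scratch."""
    for name, module in motion_module.named_modules():
        if name.endswith("proj_out") and isinstance(module, nn.Linear):
            nn.init.zeros_(module.weight)
            if module.bias is not None:
                nn.init.zeros_(module.bias)
```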
> You can retrain the last layer of AnimateDiff from zero initialization.
@guoqincode But in stage 1 the color is already wrong. In stage one the colors randomly whiten, and in stage two the output is almost pure white. I have tried zero-initializing the proj_out layer parameters of the AnimateDiff ckpt and got the result above, but it is still not satisfactory. Here is my WeChat number, 15365532260, could you add me? I would like to discuss this with you.
> Hello @luyvlei, the results are indeed pretty decent. Could you please share what steps I need to follow to get such results? Thanks so much in advance.
Just following the paper: 30000 steps for stage 1 (bs 64) and 10000 steps for stage 2 (bs 4) is enough.
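For a rough sense of scale, the global batch size has to be split across GPUs and gradient accumulation; the split below is only an assumption based on the 8-A100 setup mentioned later in the thread, not something the paper pins down.

```python
# Hypothetical decomposition of the stage-1 global batch size of 64.
num_gpus = 8
per_gpu_batch = 4
grad_accum_steps = 2
effective_batch = num_gpus * per_gpu_batch * grad_accum_steps
assert effective_batch == 64
```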
> Video 1 looks pretty good 👍! How many steps did you train stage 1 for? I trained 13000 steps but still can't control the pose :(
I have a version of my own implementation in which the PoseGuider starts working in about 2000 steps, but it does not seem to be much different from this code's implementation.
> Hello @luyvlei, the results are indeed pretty decent. Could you please share what steps I need to follow to get such results? Thanks so much in advance.
>
> Just following the paper: 30000 steps for stage 1 (bs 64) and 10000 steps for stage 2 (bs 4) is enough.
Did you use mixed precision training?
> Hello @luyvlei, the results are indeed pretty decent. Could you please share what steps I need to follow to get such results? Thanks so much in advance.
>
> Just following the paper: 30000 steps for stage 1 (bs 64) and 10000 steps for stage 2 (bs 4) is enough.
>
> Did you use mixed precision training?
Yes, fp16
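In case it helps anyone reproducing this, below is a minimal sketch of enabling fp16 with Hugging Face Accelerate; whether this repo's training scripts expose it as a constructor argument or a CLI flag is an assumption on my part.

```python
from accelerate import Accelerator

# fp16 mixed precision; loss scaling is handled internally by Accelerate.
accelerator = Accelerator(mixed_precision="fp16")
# model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)
```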
> Video 1 looks pretty good 👍! How many steps did you train stage 1 for? I trained 13000 steps but still can't control the pose :(
>
> I have a version of my own implementation in which the PoseGuider starts working in about 2000 steps, but it does not seem to be much different from this code's implementation.
Thanks for the reply; using hack training solved my problem~
> Video 1 looks pretty good 👍! How many steps did you train stage 1 for? I trained 13000 steps but still can't control the pose :(
>
> I have a version of my own implementation in which the PoseGuider starts working in about 2000 steps, but it does not seem to be much different from this code's implementation.
>
> Thanks for the reply; using hack training solved my problem~
Hello, could you tell me how to run stage 1 inference? I used train_hack.py to train, and I modified the imports like this:

```python
# from models.PoseGuider import PoseGuider
from hack_poseguider import Hack_PoseGuider as PoseGuider
# from diffusers.models import UNet2DConditionModel
from hack_unet2d import Hack_UNet2DConditionModel as UNet2DConditionModel
```

Thanks a lot!
Edit: I figured it out; I forgot to change the PoseGuider channel count to 320.
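For anyone hitting the same issue: the 320 matters because, as far as I can tell, the hack UNet adds the PoseGuider output to the features after its first convolution, so the PoseGuider's output channels have to match `conv_in.out_channels`. A quick way to confirm that number for the SD 1.5 UNet:

```python
from diffusers import UNet2DConditionModel

# Load the base SD 1.5 UNet and read off the channel count that the
# PoseGuider output has to match.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)
print(unet.conv_in.out_channels)  # 320
```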
> Hello, @guoqincode! The samples in the test set worked fine, but with pure black clothes I often got black-and-white results. Have you ever run into similar problems?
> Here is the reference image; could you run your ckpt to see whether the color is normal?
Hi, could you tell me whether you changed any part of train_hack.py? I trained stage 1 on 8 A100s for 30000 steps, but the face is terrible. Could you share some details about your training?
> Hi, could you tell me whether you changed any part of train_hack.py? I trained stage 1 on 8 A100s for 30000 steps, but the face is terrible. Could you share some details about your training?
The face can be improved by increasing the resolution and changing the VAE.
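A hedged sketch of the VAE swap: the fine-tuned `sd-vae-ft-mse` autoencoder is a common drop-in replacement for the base SD 1.5 VAE and tends to reconstruct faces and fine detail better; whether it is the exact checkpoint used here is my assumption.

```python
from diffusers import AutoencoderKL

# Replace the base SD 1.5 autoencoder with a fine-tuned one; the VAE stays
# frozen during both training stages, so this is a pure drop-in swap.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
vae.requires_grad_(False)
```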
> Hi, could you tell me whether you changed any part of train_hack.py? I trained stage 1 on 8 A100s for 30000 steps, but the face is terrible. Could you share some details about your training?
>
> The face can be improved by increasing the resolution and changing the VAE.
Hello, I would like to ask specifically: does increasing the resolution mean increasing it during training or during inference?