Open-AnimateAnyone
Why is the temporal layer trained in Stage 2 not initialized with the temporal layer weights from AnimateDiff?
Thanks for open-sourcing this! The original paper mentions using the temporal layer weights of AnimateDiff for initialization, so why does this not appear in the code? Looking forward to your reply!
Hi, you can merge the UNet you trained in the first stage with the checkpoint from AnimateDiff, like this:
```python
import torch

def merge_ckpts(ckpt1_path, ckpt2_path, output_path):
    # Load both checkpoints onto the CPU.
    ckpt1 = torch.load(ckpt1_path, map_location="cpu")
    ckpt2 = torch.load(ckpt2_path, map_location="cpu")
    # Merge the two state dicts; on key collision, entries from ckpt2 win.
    merged_state_dict = {**ckpt1, **ckpt2}
    torch.save(merged_state_dict, output_path)

trained_unet_path = './pretrained_models/unet_stage_1.ckpt'
motion_module_v1_path = './pretrained_models/AnimateDiff/mm_sd_v15.ckpt'
merged_ckpt_path = "./pretrained_models/merge_trained_unet_motion.ckpt"

merge_ckpts(trained_unet_path, motion_module_v1_path, merged_ckpt_path)
```
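For what it's worth, since `{**ckpt1, **ckpt2}` silently lets the second checkpoint overwrite any shared keys, it may be worth verifying the two state dicts are actually disjoint before merging. A minimal sketch, assuming both files are plain state dicts (AnimateDiff motion-module keys are typically separate from the base UNet keys):

```python
import torch

def check_disjoint(ckpt1_path, ckpt2_path):
    # Load both state dicts onto the CPU.
    sd1 = torch.load(ckpt1_path, map_location="cpu")
    sd2 = torch.load(ckpt2_path, map_location="cpu")
    # Keys present in both files would be silently overwritten
    # by the {**sd1, **sd2} merge, so report them.
    overlap = set(sd1) & set(sd2)
    if overlap:
        print(f"{len(overlap)} overlapping keys, e.g. {sorted(overlap)[:5]}")
    else:
        print("No overlapping keys; the merge is safe.")
```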
Why don't you use it? Is the performance not good?! @guoqincode
I used it.
> I used it.
Have you done comparison experiments? Is training the temporal module from scratch much different from fine-tuning a pretrained one?