qguopku

Results: 29 comments by qguopku

DoHLyzer/meter/features/context/packet_flow_key.py, line 33 should be: if direction == packet_direction.PacketDirection.FORWARD:
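
For context, a minimal sketch of the corrected check, assuming the `packet_direction` module is imported as in the quoted condition; everything other than that comparison (function name, field access, import path) is illustrative rather than DoHLyzer's actual code:

```python
# Illustrative sketch only -- not the actual DoHLyzer source.
from meter.features.context import packet_direction  # import path assumed

def get_packet_flow_key(packet, direction) -> tuple:
    """Build a direction-normalized (src, dst, sport, dport) flow key."""
    # The quoted fix: compare against the enum member via the imported module.
    if direction == packet_direction.PacketDirection.FORWARD:
        return (packet["IP"].src, packet["IP"].dst, packet.sport, packet.dport)
    # Backward packets: swap endpoints so both directions share one flow key.
    return (packet["IP"].dst, packet["IP"].src, packet.dport, packet.sport)
```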

Hey there, bro! You can check out my simple implementation at this link: https://github.com/guoqincode/AnimateAnyone-unofficial. Feel free to take a look and see how it works!

I don't think it's normal... It looks scary.

I will release the pre-trained model as soon as possible. Please be patient; it shouldn't take more than a few days.

If it's a texture issue, I have two suggestions: 1. train for more steps; 2. check whether the UNet is actually being updated (a quick check for this is sketched below).
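
For the second suggestion, one quick way to check is to snapshot a few UNet parameters before an optimizer step and compare them afterwards. A minimal PyTorch sketch (the `unet` variable and the name filter are illustrative):

```python
import torch

def check_unet_updates(unet, name_filter="attn"):
    """Snapshot a few UNet parameters so you can confirm they change after optimizer.step()."""
    before = {
        n: p.detach().clone()
        for n, p in unet.named_parameters()
        if p.requires_grad and name_filter in n
    }

    def compare():
        for n, p in unet.named_parameters():
            if n in before:
                delta = (p.detach() - before[n]).abs().max().item()
                print(f"{n}: max |delta| = {delta:.3e}")  # 0.0 means the weight never moved

    return compare

# Usage: compare = check_unet_updates(unet); run one training step; then call compare().
```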

Hi, you can look at the description of spatial attention in the paper; we just need to take the first half (a sketch of the idea follows).
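
A hedged sketch of that idea: the reference features are concatenated with the denoising features before self-attention (the paper does this along the width of the feature map; the sketch flattens to tokens for simplicity), and only the first half of the attention output is kept. The module and tensor names are illustrative, not the unofficial repo's exact code:

```python
import torch

def spatial_attention_with_reference(attn, hidden_states, ref_states):
    """Self-attention over [hidden; reference] tokens, keeping only the first half.

    hidden_states, ref_states: (batch, tokens, channels) with equal token counts.
    attn: any self-attention module that maps (batch, tokens, channels) to the same shape.
    """
    num_tokens = hidden_states.shape[1]
    # Concatenate along the spatial/token dimension so queries can attend to the reference.
    combined = torch.cat([hidden_states, ref_states], dim=1)
    out = attn(combined)
    # "Take the first half": drop the reference half of the attention output.
    return out[:, :num_tokens]
```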

You can retrain the last layer of AnimateDiff from zero initialization (see the sketch below).
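
A minimal sketch of that zero initialization, assuming the motion module's final projection is exposed as `proj_out` (the attribute name is an assumption; adjust it to the real layer):

```python
import torch.nn as nn

def zero_init_last_layer(motion_module: nn.Module) -> None:
    """Zero the final projection so the retrained layer starts as a no-op residual."""
    proj = motion_module.proj_out  # assumed name of the last layer
    nn.init.zeros_(proj.weight)
    if proj.bias is not None:
        nn.init.zeros_(proj.bias)
```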

That seems strange to me; I'm at 512 resolution with batch size 8 on a single card.

> > That seems strange to me; I'm at 512 resolution with batch size 8 on a single card.
>
> After I set "sample_n_frames: 16" and turned "mixed_precision_training" and "enable_xformers_memory_efficient_attention" to True, it runs fine. But there is still a problem: in train_hack.py the PoseGuider output is set to 320 channels (poseguider = PoseGuider(noise_latent_channels=320)), but at inference time, python3 -m pipelines.animation_stage_1 --config configs/prompts/animation_stage_1.yaml, the PoseGuider is loaded with the channel count set to 4, model = PoseGuider(noise_latent_channels=4), and the way it is used later in the pipeline also corresponds to the 4-channel case (latent_model_input = self.scheduler.scale_model_input(latent_model_input,...

Hi, I just forgot to change it, sorry.
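
In other words, the stage-1 inference script should construct the PoseGuider with the same channel count used in train_hack.py. A one-line sketch of the implied fix, assuming PoseGuider is imported as in the repo's scripts:

```python
# Match the 320 channels used during training instead of the leftover value of 4.
model = PoseGuider(noise_latent_channels=320)
```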