
Official implementation of AnimateDiff.

191 AnimateDiff issues (sorted by recently updated)

https://paperswithcode.com/method/depthwise-separable-convolution#:~:text=While%20standard%20convolution%20performs%20the,a%20linear%20combination%20of%20the

Current setup:
![Screenshot from 2024-03-16 06-33-04](https://github.com/guoyww/AnimateDiff/assets/289994/d3b19c37-9f14-43ad-88bc-2b4fcf038c75)
![Screenshot from 2024-03-16 06-22-18](https://github.com/guoyww/AnimateDiff/assets/289994/9ccf8e25-6413-401c-8156-843acbd322eb)

https://youtu.be/vVaRhZXovbw

Among the architecture redesign options mentioned, using efficient blocks, specifically depthwise separable convolutions, is probably the easiest to...
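To see why depthwise separable convolutions are attractive as an "efficient block" swap, a quick parameter count helps. The sketch below is illustrative only: the 320-channel / 3x3 sizes are assumptions chosen to resemble a Stable Diffusion UNet block, not AnimateDiff's actual configuration.

```python
def conv_params(c_in, c_out, k):
    # Standard convolution: one k x k filter per (input, output) channel pair.
    return k * k * c_in * c_out

def separable_conv_params(c_in, c_out, k):
    # Depthwise separable: one k x k filter per input channel (depthwise),
    # then a 1x1 pointwise convolution that mixes channels.
    return k * k * c_in + c_in * c_out

# Hypothetical sizes, not taken from the AnimateDiff code:
standard = conv_params(320, 320, 3)             # 921,600 weights
separable = separable_conv_params(320, 320, 3)  # 105,280 weights
print(f"standard={standard}, separable={separable}, "
      f"ratio={standard / separable:.1f}x")
```

At these sizes the separable variant uses roughly 8.8x fewer weights, which is the usual motivation for this redesign.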

```
Traceback (most recent call last):
  File "/root/miniconda3/envs/animatediff/lib/python3.10/site-packages/gradio/routes.py", line 534, in predict
    output = await route_utils.call_process_api(
  File "/root/miniconda3/envs/animatediff/lib/python3.10/site-packages/gradio/route_utils.py", line 226, in call_process_api
    output = await app.get_blocks().process_api(
  File "/root/miniconda3/envs/animatediff/lib/python3.10/site-packages/gradio/blocks.py", line 1550, in...
```

```
Traceback (most recent call last):
  File "/data/AnimateDiff-main/train.py", line 495, in <module>
    main(name=name, launcher=args.launcher, use_wandb=args.wandb, **config)
  File "/data/AnimateDiff-main/train.py", line 130, in main
    local_rank = init_dist(launcher=launcher)
  File "/data/AnimateDiff-main/train.py", line 48, in init_dist
    rank...
```

```
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
Traceback (most recent call last):
  File "/app/AnimateDiff/app.py", line 327, in demo...
```

Why are unet_use_temporal_attention and unet_use_temporal_attention always None or False? Temporal attention does not seem to be working. Does anyone know about this? Thanks!

Thanks to the author for this work! When will the training code for SparseCtrl be released?

```
(animatediff) PS F:\animediff\AnimateDiff> python -m scripts.animate --config configs/prompts/1-ToonYou.yaml
C:\Users\dbodbo\miniconda3\envs\animatediff\lib\site-packages\torchvision\io\image.py:13: UserWarning: Failed to load image Python extension: Could not find module 'C:\Users\dbodbo\miniconda3\envs\animatediff\Lib\site-packages\torchvision\image.pyd' (or one of its dependencies). Try using the full...
```

Can you explain why encoder_hidden_state is used in the motion module? The motion module, as described in the paper, is vanilla temporal attention, not cross-attention.
![image](https://github.com/guoyww/AnimateDiff/assets/35716657/7ed76732-c7b7-4597-83da-b48ff19b5724)
https://github.com/guoyww/AnimateDiff/blob/cf80ddeb47b69cf0b16f225800de081d486d7f21/animatediff/models/unet_blocks.py#L411
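For reference, "vanilla temporal attention" means self-attention where the sequence axis is the frame axis and no external conditioning (no encoder_hidden_states) enters the keys and values. The NumPy sketch below is a simplification, not the AnimateDiff implementation: it is single-head and uses identity Q/K/V projections purely to keep the shape bookkeeping visible.

```python
import numpy as np

def temporal_self_attention(x):
    """Sketch of vanilla temporal self-attention (single head, identity
    Q/K/V projections for brevity; real modules learn these weights).

    x: array of shape (batch, frames, height*width, channels).
    Attention runs along the frame axis, independently per spatial
    location, with no encoder_hidden_states (i.e. no cross-attention).
    """
    b, f, hw, c = x.shape
    # Fold spatial locations into the batch so the sequence axis is time.
    seq = x.transpose(0, 2, 1, 3).reshape(b * hw, f, c)
    q, k, v = seq, seq, seq
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(c)    # (b*hw, f, f)
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)          # softmax over frames
    out = attn @ v                                    # (b*hw, f, c)
    return out.reshape(b, hw, f, c).transpose(0, 2, 1, 3)
```

Because keys and values come only from the video features themselves, passing encoder_hidden_states into such a block would indeed turn it into cross-attention, which is the discrepancy the question is pointing at.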

I want to train AnimateDiff, but I found that only the v1 version of the pre-trained model can be loaded with the current code. Is that correct?

Can you provide the supplementary material for the paper "SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models"?