stable-diffusion-webui-forge
[Bug]: AnimateDiff not working in forge
Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [X] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [x] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
What happened?
Txt2img is not working in Forge when using AnimateDiff, but it works on the webui main branch.
Steps to reproduce the problem
1. Activate AnimateDiff
2. Click Generate
What should have happened?
Webui should generate an animation.
What browsers do you use to access the UI?
Google Chrome
Sysinfo
Console logs
Already up to date.
venv "C:\stable-diffusion-webui-forge\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f0.0.14v1.8.0rc-latest-184-g43c9e3b5
Commit hash: 43c9e3b5ce1642073c7a9684e36b45489eeb4a49
Launching Web UI with arguments: --ckpt-dir C:/stable-diffusion-webui//models/Stable-diffusion --hypernetwork-dir C:/stable-diffusion-webui//models/hypernetworks --embeddings-dir C:/stable-diffusion-webui//embeddings --lora-dir C:/stable-diffusion-webui//models/Lora
Total VRAM 24576 MB, total RAM 65461 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : native
VAE dtype: torch.bfloat16
Using pytorch cross attention
ControlNet preprocessor location: C:\stable-diffusion-webui-forge\models\ControlNetPreprocessor
2024-02-19 17:43:57,076 - AnimateDiff - INFO - Injecting LCM to UI.
Loading weights [bff4610d23] from C:\stable-diffusion-webui\models\Stable-diffusion\1.5\epicrealism_naturalSin.safetensors
2024-02-19 17:43:57,232 - AnimateDiff - INFO - Hacking i2i-batch.
2024-02-19 17:43:57,279 - ControlNet - INFO - ControlNet UI callback registered.
model_type EPS
UNet ADM Dimension 0
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 10.8s (prepare environment: 2.4s, import torch: 3.7s, import gradio: 0.8s, setup paths: 0.8s, initialize shared: 0.1s, other imports: 0.6s, load scripts: 1.6s, create ui: 0.7s, gradio launch: 0.2s).
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
left over keys: dict_keys(['model_ema.decay', 'model_ema.num_updates'])
To load target model SD1ClipModel
Begin to load 1 model
Moving model(s) has taken 0.18 seconds
Model loaded in 2.9s (load weights from disk: 0.3s, forge load real models: 2.0s, load VAE: 0.2s, calculate empty prompt: 0.3s).
2024-02-19 17:44:42,065 - AnimateDiff - INFO - AnimateDiff process start.
2024-02-19 17:44:42,065 - AnimateDiff - INFO - Loading motion module mm_sd_v15_v2.ckpt from C:\stable-diffusion-webui\extensions\sd-webui-animatediff\model\mm_sd_v15_v2.ckpt
2024-02-19 17:44:42,707 - AnimateDiff - INFO - Guessed mm_sd_v15_v2.ckpt architecture: MotionModuleType.AnimateDiffV2
2024-02-19 17:44:44,780 - AnimateDiff - WARNING - Missing keys <All keys matched successfully>
2024-02-19 17:44:45,230 - AnimateDiff - INFO - Injecting motion module mm_sd_v15_v2.ckpt into SD1.5 UNet middle block.
2024-02-19 17:44:45,230 - AnimateDiff - INFO - Injecting motion module mm_sd_v15_v2.ckpt into SD1.5 UNet input blocks.
2024-02-19 17:44:45,230 - AnimateDiff - INFO - Injecting motion module mm_sd_v15_v2.ckpt into SD1.5 UNet output blocks.
2024-02-19 17:44:45,231 - AnimateDiff - INFO - Setting DDIM alpha.
2024-02-19 17:44:45,238 - AnimateDiff - INFO - Injection finished.
2024-02-19 17:44:45,238 - AnimateDiff - INFO - Hacking LoRA module to support motion LoRA
2024-02-19 17:44:45,246 - AnimateDiff - INFO - Hacking CFGDenoiser forward function.
To load target model BaseModel
Begin to load 1 model
Moving model(s) has taken 0.33 seconds
0%| | 0/20 [00:00<?, ?it/s]
Traceback (most recent call last):
File "C:\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
task.work()
File "C:\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
self.result = self.func(*self.args, **self.kwargs)
File "C:\stable-diffusion-webui-forge\modules\txt2img.py", line 111, in txt2img_function
processed = processing.process_images(p)
File "C:\stable-diffusion-webui-forge\modules\processing.py", line 750, in process_images
res = process_images_inner(p)
File "C:\stable-diffusion-webui-forge\modules\processing.py", line 921, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "C:\stable-diffusion-webui-forge\modules\processing.py", line 1276, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "C:\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "C:\stable-diffusion-webui-forge\modules\sd_samplers_common.py", line 263, in launch_sampling
return func()
File "C:\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "C:\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\stable-diffusion-webui-forge\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "C:\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\stable-diffusion-webui-forge\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 174, in mm_cfg_forward
if sd_samplers_common.apply_refiner(self):
TypeError: apply_refiner() missing 1 required positional argument: 'x'
apply_refiner() missing 1 required positional argument: 'x'
*** Error completing request
*** Arguments: ('task(6obr9t8coh4003a)', <gradio.routes.Request object at 0x000001961F0E9DB0>, 'waterfall', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, <scripts.animatediff_ui.AnimateDiffProcess object at 0x000001961F0EA4A0>, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', 
False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "C:\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable
---
Additional information
No response
You'll note the highlighted section of the README for this repo.
I installed AnimateDiff from the Extensions / Available tab in the Forge webui, but it works if you switch to the AnimateDiff forge branch: cd extensions/sd-webui-animatediff and git checkout forge/master. No AnimateLCM yet though.
I had a problem with it too. I went to the extensions folder, zipped "sd-webui-animatediff" to save it (just in case), then deleted the original folder and ran git clone https://github.com/continue-revolution/sd-forge-animatediff.git
You'll get a folder named "sd-forge-animatediff"; rename it to "sd-webui-animatediff".
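The backup-and-replace steps above can be sketched as a short shell session (paths assume a default Forge install in the current directory; the `.bak` backup name is illustrative):

```shell
# Replace the original extension with the Forge-specific fork.
cd stable-diffusion-webui-forge/extensions

# Keep a copy of the old extension, just in case (name is illustrative)
mv sd-webui-animatediff sd-webui-animatediff.bak

# Clone the Forge fork and rename the folder so the WebUI loads it
# under the name it already knows
git clone https://github.com/continue-revolution/sd-forge-animatediff.git
mv sd-forge-animatediff sd-webui-animatediff
```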
See this issue too: https://github.com/continue-revolution/sd-forge-animatediff/issues/4
I still have problems with motion LoRAs though:
2024-02-22 01:48:55,518 - AnimateDiff - WARNING - Missing keys <All keys matched successfully> lora key not loaded down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.processor.to_k_lora.down.weight lora key not loaded down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.processor.to_k_lora.up.weight lora key not loaded down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.processor.to_out_lora.down.weight lora key not loaded down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.processor.to_out_lora.up.weight lora key not loaded down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.processor.to_q_lora.down.weight ....
Prompt:
instagram photo, close-up portrait of 18 y.o swedish woman in dress, beautiful face, makeup, night city street, bokeh, motion blur, epiCPhoto, lora:epiCRealismHelper:1, lora:LCM_LoRA_Weights_SD15:1, lora:mm_sd15_v2_lora_TiltUp:1
Negative prompt:
(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation, epiCPhoto-neg
https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/154378947/e2e61f66-6eed-42d7-9240-69bd0c5da593
If you are using sd-webui-animatediff: do git checkout forge/master in your terminal, inside stable-diffusion-webui-forge/extensions/sd-webui-animatediff
If you are using sd-forge-animatediff: you don't need to do anything about it.
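For the first case, the branch switch amounts to the following (assuming a default Forge install; the `git fetch` is added here in case the forge/master branch isn't known locally yet):

```shell
# Switch the original sd-webui-animatediff extension to its Forge-compatible branch
cd stable-diffusion-webui-forge/extensions/sd-webui-animatediff
git fetch origin          # make sure forge/master is known locally
git checkout forge/master
```

Restart the webui afterwards so the updated extension scripts are reloaded.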
Motion LoRA is currently not available in Forge, because Forge has a completely different LoRA system and I honestly don't like it. I need to figure out the best way to implement it.
Please send issues to my repository, not here. I was extremely depressed during the past few weeks and was not able to deal with any issues. I will try to address all issues this weekend.
Motion LoRA is not available, so keywords are needed to describe the movement of the camera and the content.
https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/74442467/c63f44c2-7600-4ce0-9b24-0149c6035ffb