stable-diffusion-webui-forge
TypeError: 'NoneType' object is not iterable ... and other things
Hi, thanks for the heads-up, but I'm having a rather serious issue that I can't seem to get past. For context: Windows 11; NVIDIA RTX A2000, 12GB VRAM; 64GB RAM; 5 TB SSD; CUDA 12.3; Triton 2.0 (Windows Python 3 wheel); xformers; newest stable or newest pre-release (both present the identical issue I'm about to go into); torch 2.2.0+cu121.
I have been having trouble with Forge since it came out. I've gotten a couple of wonderfully fast generations out of it -- so, so cool, and I'm very grateful that you work so diligently to make this so much more optimized. It's like when a family member spends all day cooking while you're outside playing, and you come back in and it's like "woah" -- that is to say, you do this work with love! Thank you!
But can you PUHLEEEEEZE tell me what the heck I'm doing wrong? I have never had this sort of issue before using Forge (as in, this is really my first REAL problem).
Also: what's with token merging being "under construction"? :( Anyway, this is what I'm getting:
Launching Web UI with arguments: --no-hashing --enable-insecure-extension-access --disable-safe-unpickle --xformers
Total VRAM 12282 MB, total RAM 65174 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA RTX A2000 12GB : native
VAE dtype: torch.bfloat16
Using pytorch cross attention
*** Cannot import xformers
Traceback (most recent call last):
File "Q:\SD\stable-diffusion-webui-forge\modules\sd_hijack_optimizations.py", line 160, in
*** Error loading script: preprocessor_marigold.py
Traceback (most recent call last):
File "Q:\SD\stable-diffusion-webui-forge\venv\lib\site-packages\diffusers\utils\import_utils.py", line 710, in get_module
return importlib.import_module("." + module_name, self.name)
File "Q:\Anaconda3\envs\forge\lib\importlib_init.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "Q:\SD\stable-diffusion-webui-forge\venv\lib\site-packages\diffusers\utils\import_utils.py", line 710, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "Q:\Anaconda3\envs\forge\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "Q:\SD\stable-diffusion-webui-forge\venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 22, in <module>
from ..loaders import UNet2DConditionLoadersMixin
File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
File "Q:\SD\stable-diffusion-webui-forge\venv\lib\site-packages\diffusers\utils\import_utils.py", line 700, in __getattr__
module = self._get_module(self._class_to_module[name])
File "Q:\SD\stable-diffusion-webui-forge\venv\lib\site-packages\diffusers\utils\import_utils.py", line 712, in _get_module
raise RuntimeError(
RuntimeError: Failed to import diffusers.loaders.unet because of the following error (look up to see its traceback):
JITFunction.__init__() got an unexpected keyword argument 'debug'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "Q:\SD\stable-diffusion-webui-forge\modules\scripts.py", line 541, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "Q:\SD\stable-diffusion-webui-forge\modules\script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "Q:\SD\stable-diffusion-webui-forge\extensions-builtin\forge_preprocessor_marigold\scripts\preprocessor_marigold.py", line 10, in <module>
from marigold.model.marigold_pipeline import MarigoldPipeline
File "Q:\SD\stable-diffusion-webui-forge\extensions-builtin\forge_preprocessor_marigold\marigold\model\marigold_pipeline.py", line 9, in <module>
from diffusers import (
File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
File "Q:\SD\stable-diffusion-webui-forge\venv\lib\site-packages\diffusers\utils\import_utils.py", line 701, in __getattr__
value = getattr(module, name)
File "Q:\SD\stable-diffusion-webui-forge\venv\lib\site-packages\diffusers\utils\import_utils.py", line 700, in __getattr__
module = self._get_module(self._class_to_module[name])
File "Q:\SD\stable-diffusion-webui-forge\venv\lib\site-packages\diffusers\utils\import_utils.py", line 712, in _get_module
raise RuntimeError(
RuntimeError: Failed to import diffusers.models.unet_2d_condition because of the following error (look up to see its traceback):
Failed to import diffusers.loaders.unet because of the following error (look up to see its traceback):
JITFunction.__init__() got an unexpected keyword argument 'debug'
Originally posted by @LastTargaryen in https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/166#discussioncomment-8426325
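A note on the error buried in the OP's log: `JITFunction.__init__() got an unexpected keyword argument 'debug'` is the classic signature of a Triton build older than what the code importing it expects (the `debug` keyword appears to have been added to Triton's `JITFunction` after 2.0), and because the diffusers import chain pulls Triton in, every script that touches diffusers fails along with it. Below is a minimal, standard-library-only sketch for checking what is actually installed in the venv; the package names are the usual PyPI ones, nothing Forge-specific:

```python
# Print the installed versions of the three packages involved in the
# failing import chain; run this with the venv's own python.exe.
import importlib.metadata as md

for pkg in ("torch", "xformers", "triton"):
    try:
        print(f"{pkg}: {md.version(pkg)}")
    except md.PackageNotFoundError:
        print(f"{pkg}: not installed")
```

If Triton reports 2.0.x here, aligning it with what the caller expects (upgrading the wheel, or removing the stray Windows Triton wheel entirely) is the usual way out -- though that is an educated guess from the traceback rather than a confirmed fix.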
This bug also happens when I use the original ControlNet extension (not the built-in one, which I removed): when I use IP-Adapter Plus to copy a style, it throws this non-iterable error. But with the built-in ControlNet IP-Adapter, when I want to use IP-Adapter 1.5 Plus, it gives me a black image, no matter which preprocessor.
I can confirm I sometimes get 'NoneType' object is not iterable at random, needing a restart of the webui, usually when playing around with ControlNet, but I have not narrowed down the exact cause.
Self Attention Guidance triggers it pretty much 100% of the time for me now. It was working at some point, but in the middle of a batch I started getting the NoneType errors, and even after a restart (which reloaded my settings) it persisted until I disabled Self Attention Guidance.
I have this error when I try to generate images that I previously generated with a1111.
Same issue when I try to use ControlNet with SDXL models.
I get this issue as well.
I use the base XL model with the ControlNet tile_resample preprocessor and an XL ControlNet model.
The issue description in the OP contains no reference to a NoneType error. Including it only in the title, with no traceback, is not helpful for developers. Please create a single issue for each problem you're coming across, not multiple.
Same issue when I try to use ControlNet with SDXL models.
Yeah, I have this issue only with ControlNet and SDXL.
Getting this also now, both with AnimateDiff and Model Mixer. It breaks the session.
I don't have time to create a full thread on this, but here's an example of my current console. Perhaps it's related to the memory changes in recent commits (but that's just a guess). @lllyasviel
[Memory Management] Current Free GPU Memory (MB) = 21161.912109375
[Memory Management] Model Memory (MB) = 1903.1046981811523
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 18234.807411193848
Moving model(s) has taken 0.27 seconds
Model loaded in 2.0s (forge solving config: 0.2s, forge load real models: 1.4s, calculate empty prompt: 0.3s).
INFO:sd_dynamic_prompts.dynamic_prompting:Prompt matrix will create 13 images in a total of 13 batches.
To load target model SDXL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 19391.89306640625
[Memory Management] Model Memory (MB) = 4897.086494445801
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 13470.80657196045
Moving model(s) has taken 0.73 seconds
0%| | 0/5 [00:00<?, ?it/s]
Traceback (most recent call last):
File "D:\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
task.work()
File "D:\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
self.result = self.func(*self.args, **self.kwargs)
File "D:\stable-diffusion-webui-forge\modules\txt2img.py", line 111, in txt2img_function
processed = processing.process_images(p)
File "D:\stable-diffusion-webui-forge\modules\processing.py", line 751, in process_images
res = process_images_inner(p)
File "D:\stable-diffusion-webui-forge\modules\processing.py", line 920, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "D:\stable-diffusion-webui-forge\modules\processing.py", line 1272, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "D:\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "D:\stable-diffusion-webui-forge\modules\sd_samplers_common.py", line 263, in launch_sampling
return func()
File "D:\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\stable-diffusion-webui-forge\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable-diffusion-webui-forge\modules\sd_samplers_cfg_denoiser.py", line 182, in forward
denoised = forge_sampler.forge_sample(self, denoiser_params=denoiser_params,
File "D:\stable-diffusion-webui-forge\modules_forge\forge_sampler.py", line 83, in forge_sample
denoised = sampling_function(model, x, timestep, uncond, cond, cond_scale, model_options, seed)
File "D:\stable-diffusion-webui-forge\ldm_patched\modules\samplers.py", line 289, in sampling_function
cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
File "D:\stable-diffusion-webui-forge\ldm_patched\modules\samplers.py", line 258, in calc_cond_uncond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
File "D:\stable-diffusion-webui-forge\ldm_patched\modules\model_base.py", line 90, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable-diffusion-webui-forge\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 849, in forward
assert (y is not None) == (
AssertionError: must specify y if and only if the model is class-conditional
must specify y if and only if the model is class-conditional
*** Error completing request
*** Arguments: ('task(a9ev94s5sxhmgzf)', <gradio.routes.Request object at 0x000002374F023010>, 'topnotch artstyle, filmstyle, __prompt__ ', '', [], 5, 'DPM++ 2M Karras', 13, 1, 2.5, 1048, 944, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', <scripts.animatediff_ui.AnimateDiffProcess object at 0x000002374F021BA0>, False, 0.6, 0.9, 0.25, 1, True, False, True, 'SDXL\\sdxl_lightning_4step.safetensors [e0d996ee00]', 'None', 3, '', {'save_settings': ['fp16', 'prune', 'safetensors'], 'calc_settings': ['GPU']}, True, True, False, 'SDXL\\2024-02-19 - OT - New-Topnotch Kosmo2 Captioned.safetensors [b8663dd046]', 'SDXL\\2024-02-20 - OT - Shotdeck-3k on SDXL Base.safetensors [18491229de]', 'None', 'Sum', 'Sum', 'Sum', 0.5, 0.5, 0.5, True, True, True, [], [], [], [], [], [], '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', False, False, False, '', '', '', 'Normal', 'Normal', 'Normal', ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', 
batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "D:\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable
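It is worth spelling out how the two errors above relate, because the generic one masks the real one. The actual failure is the AssertionError from `openaimodel.py`: the SDXL UNet is class-conditional, so its forward pass requires a `y` tensor, and something in the patched sampling path handed it `None` (mismatched SD1.5/SDXL components are a common way to get there). The worker thread catches that, the task returns no result, and `call_queue.py` then runs `res = list(func(*args, **kwargs))` on a `None` return value, which is what produces the unhelpful `'NoneType' object is not iterable`. Here is a toy reproduction of the masking pattern; `run_task` and `generate` are illustrative names, not Forge's actual functions:

```python
def generate():
    # stands in for the sampling call; the real root cause in the log is
    # the AssertionError raised inside the UNet forward pass
    raise AssertionError("must specify y if and only if the model is class-conditional")

def run_task(func):
    try:
        return func()
    except Exception as exc:
        print(f"root cause: {exc}")   # the real error is only printed here...
        return None                   # ...and the caller silently receives None

result = run_task(generate)
try:
    images = list(result)             # list(None) -> the error everyone reports
except TypeError as exc:
    print(exc)                        # 'NoneType' object is not iterable
```

So whenever this TypeError appears, the informative traceback is usually the one printed just above it in the console.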
Rolled back a bit; I always get it with AnimateDiff right now...
Anyway, I'll try to open a new thread in the morning if I can narrow it down to something that seems universal. It's very late for me here now.
/Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
2024-02-25 23:43:14,248 - AnimateDiff - INFO - Injecting LCM to UI.
Loading weights [e0d996ee00] from E:\Stable Diffusion Checkpoints\SDXL\sdxl_lightning_4step.safetensors
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
Loading VAE weights specified in settings: D:\stable-diffusion-webui-forge\models\VAE\sdxl_vae.safetensors
To load target model SDXLClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 23007.99609375
[Memory Management] Model Memory (MB) = 2144.3546981811523
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 19839.641395568848
Moving model(s) has taken 0.38 seconds
Model loaded in 3.6s (load weights from disk: 0.1s, forge load real models: 2.5s, load VAE: 0.1s, calculate empty prompt: 0.8s).
2024-02-25 23:43:18,632 - AnimateDiff - INFO - Hacking i2i-batch.
2024-02-25 23:43:18,747 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 34.3s (prepare environment: 14.4s, import torch: 3.9s, import gradio: 0.8s, setup paths: 0.8s, other imports: 0.6s, list SD models: 0.4s, load scripts: 6.4s, refresh VAE: 0.2s, create ui: 5.7s, gradio launch: 0.4s, app_started_callback: 0.4s).
*** Error running process: D:\stable-diffusion-webui-forge\extensions\sd-dynamic-prompts\scripts\dynamic_prompting.py
Traceback (most recent call last):
File "D:\stable-diffusion-webui-forge\modules\scripts.py", line 803, in process
script.process(p, *script_args)
File "D:\stable-diffusion-webui-forge\extensions\sd-dynamic-prompts\sd_dynamic_prompts\dynamic_prompting.py", line 481, in process
all_prompts, all_negative_prompts = generate_prompts(
File "D:\stable-diffusion-webui-forge\extensions\sd-dynamic-prompts\sd_dynamic_prompts\helpers.py", line 93, in generate_prompts
all_prompts = prompt_generator.generate(prompt, num_prompts, seeds=seeds) or [""]
File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\dynamicprompts\generators\randomprompt.py", line 71, in generate
prompts.append(str(next(iter(gen))))
StopIteration
---
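That StopIteration is a separate problem from the NoneType one: `next(iter(gen))` in dynamicprompts raises the moment the generator is exhausted, for example when a `__wildcard__` file resolves to nothing. A defensive sketch of the same call using `next()`'s default argument; `take_one` is an illustrative helper, not part of the library:

```python
def take_one(gen, fallback=""):
    # next() with a default value never raises StopIteration
    return str(next(iter(gen), fallback))

exhausted = (p for p in [])       # a generator with nothing left to yield
print(repr(take_one(exhausted)))  # '' instead of an uncaught StopIteration
```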
To load target model SDXL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 21160.61767578125
[Memory Management] Model Memory (MB) = 4897.086494445801
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 15239.53118133545
Moving model(s) has taken 0.99 seconds
100%|██████████| 20/20 [00:01<00:00, 18.64it/s]
To load target model AutoencoderKL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 16154.17919921875
[Memory Management] Model Memory (MB) = 159.55708122253418
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 14970.622117996216
Moving model(s) has taken 0.03 seconds
Total progress: 100%|██████████| 20/20 [00:01<00:00, 17.77it/s]
INFO:sd_dynamic_prompts.dynamic_prompting:Prompt matrix will create 11 images in a total of 11 batches.
100%|██████████| 3/3 [00:00<00:00, 7.31it/s]
100%|██████████| 3/3 [00:00<00:00, 8.10it/s]
100%|██████████| 3/3 [00:00<00:00, 8.24it/s]
100%|██████████| 3/3 [00:00<00:00, 8.14it/s]
100%|██████████| 3/3 [00:00<00:00, 7.96it/s]
100%|██████████| 3/3 [00:00<00:00, 8.13it/s]
Total progress:  55%|█████▌    | 18/33 [00:04<00:04, 3.73it/s]
INFO:sd_dynamic_prompts.dynamic_prompting:Prompt matrix will create 11 images in a total of 11 batches.
100%|██████████| 3/3 [00:00<00:00, 6.46it/s]
100%|██████████| 3/3 [00:00<00:00, 6.58it/s]
100%|██████████| 3/3 [00:00<00:00, 7.03it/s]
Total progress:  27%|██▋       | 9/33 [00:02<00:07, 3.37it/s]
2024-02-25 23:45:18,377 - ControlNet - INFO - ControlNet Input Mode: InputMode.SIMPLE
2024-02-25 23:45:18,378 - ControlNet - INFO - Using preprocessor: canny
2024-02-25 23:45:18,379 - ControlNet - INFO - preprocessor resolution = 512
2024-02-25 23:45:18,711 - ControlNet - INFO - Current ControlNet ControlNetPatcher: D:\stable-diffusion-webui-forge\models\ControlNet\control-lora-canny-rank128.safetensors
INFO:sd_dynamic_prompts.dynamic_prompting:Prompt matrix will create 11 images in a total of 11 batches.
2024-02-25 23:45:18,763 - ControlNet - INFO - ControlNet Method canny patched.
To load target model SDXL
Begin to load 1 model
Reuse 1 loaded models
[Memory Management] Current Free GPU Memory (MB) = 15975.154296875
[Memory Management] Model Memory (MB) = 0.0
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 14951.154296875
Moving model(s) has taken 0.02 seconds
100%|██████████| 3/3 [00:00<00:00, 6.18it/s]
2024-02-25 23:45:20,853 - ControlNet - INFO - ControlNet Method canny patched.
To load target model SDXL
Begin to load 1 model
Reuse 1 loaded models
[Memory Management] Current Free GPU Memory (MB) = 15974.09814453125
[Memory Management] Model Memory (MB) = 0.0
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 14950.09814453125
Moving model(s) has taken 0.02 seconds
100%|██████████| 3/3 [00:00<00:00, 6.76it/s]
2024-02-25 23:45:21,927 - ControlNet - INFO - ControlNet Method canny patched.
To load target model SDXL
Begin to load 1 model
Reuse 1 loaded models
[Memory Management] Current Free GPU Memory (MB) = 15973.259765625
[Memory Management] Model Memory (MB) = 0.0
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 14949.259765625
Moving model(s) has taken 0.02 seconds
100%|██████████| 3/3 [00:00<00:00, 6.45it/s]
2024-02-25 23:45:23,014 - ControlNet - INFO - ControlNet Method canny patched.
To load target model SDXL
Begin to load 1 model
Reuse 1 loaded models
[Memory Management] Current Free GPU Memory (MB) = 15972.42138671875
[Memory Management] Model Memory (MB) = 0.0
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 14948.42138671875
Moving model(s) has taken 0.02 seconds
100%|██████████| 3/3 [00:00<00:00, 6.74it/s]
Total progress:  36%|███▌      | 12/33 [00:04<00:07, 2.91it/s]
INFO:sd_dynamic_prompts.dynamic_prompting:Prompt matrix will create 11 images in a total of 11 batches.
To load target model SDXL
Begin to load 1 model
Reuse 1 loaded models
[Memory Management] Current Free GPU Memory (MB) = 15971.80078125
[Memory Management] Model Memory (MB) = 0.0
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 14947.80078125
Moving model(s) has taken 0.02 seconds
100%|██████████| 3/3 [00:00<00:00, 7.73it/s]
100%|██████████| 3/3 [00:00<00:00, 8.04it/s]
100%|██████████| 3/3 [00:00<00:00, 8.12it/s]
100%|██████████| 3/3 [00:00<00:00, 7.96it/s]
100%|██████████| 3/3 [00:00<00:00, 7.56it/s]
Total progress:  45%|████▌     | 15/33 [00:04<00:04, 3.67it/s]
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001e520a81500] moov atom not found
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001e520a81500] moov atom not found
2024-02-25 23:46:13,855 - AnimateDiff - INFO - AnimateDiff process start.
2024-02-25 23:46:13,855 - AnimateDiff - INFO - Loading motion module animatediffMotion_sdxlV10Beta.ckpt from D:\stable-diffusion-webui-forge\extensions\sd-webui-animatediff\model\animatediffMotion_sdxlV10Beta.ckpt
2024-02-25 23:46:14,134 - AnimateDiff - INFO - Guessed animatediffMotion_sdxlV10Beta.ckpt architecture: MotionModuleType.AnimateDiffXL
2024-02-25 23:46:14,795 - AnimateDiff - WARNING - Missing keys <All keys matched successfully>
2024-02-25 23:46:14,930 - AnimateDiff - INFO - Injecting motion module animatediffMotion_sdxlV10Beta.ckpt into SDXL UNet input blocks.
2024-02-25 23:46:14,930 - AnimateDiff - INFO - Injecting motion module animatediffMotion_sdxlV10Beta.ckpt into SDXL UNet output blocks.
2024-02-25 23:46:14,932 - AnimateDiff - INFO - Setting DDIM alpha.
2024-02-25 23:46:14,940 - AnimateDiff - INFO - Injection finished.
2024-02-25 23:46:14,946 - AnimateDiff - INFO - Hacking CFGDenoiser forward function.
INFO:sd_dynamic_prompts.dynamic_prompting:Prompt matrix will create 17215 images in a total of 11 batches.
Memory cleanup has taken 0.55 seconds
0%| | 0/3 [00:00<?, ?it/s]
Traceback (most recent call last):
File "D:\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
task.work()
File "D:\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
self.result = self.func(*self.args, **self.kwargs)
File "D:\stable-diffusion-webui-forge\modules\txt2img.py", line 111, in txt2img_function
processed = processing.process_images(p)
File "D:\stable-diffusion-webui-forge\modules\processing.py", line 751, in process_images
res = process_images_inner(p)
File "D:\stable-diffusion-webui-forge\modules\processing.py", line 920, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "D:\stable-diffusion-webui-forge\modules\processing.py", line 1272, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "D:\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "D:\stable-diffusion-webui-forge\modules\sd_samplers_common.py", line 263, in launch_sampling
return func()
File "D:\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\stable-diffusion-webui-forge\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable-diffusion-webui-forge\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 174, in mm_cfg_forward
if sd_samplers_common.apply_refiner(self):
TypeError: apply_refiner() missing 1 required positional argument: 'x'
apply_refiner() missing 1 required positional argument: 'x'
*** Error completing request
*** Arguments: ('task(g31sfmvrutr2ir3)', <gradio.routes.Request object at 0x000001E6C86D20E0>, '__prompt__ ', '', [], 3, 'DPM++ 2M Karras', 11, 1, 1.5, 1000, 912, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', <scripts.animatediff_ui.AnimateDiffProcess object at 0x000001E6C864C970>, False, 0.6, 0.9, 0.25, 1, True, False, False, 'SDXL\\sdxl_lightning_4step.safetensors [e0d996ee00]', 'None', 3, '', {'save_settings': ['fp16', 'prune', 'safetensors'], 'calc_settings': ['GPU', 'fastrebasin']}, True, False, False, 'None', 'None', 'None', 'Sum', 'Sum', 'Sum', 0.5, 0.5, 0.5, True, True, True, [], [], [], [], [], [], '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', False, False, False, '', '', '', 'Normal', 'Normal', 'Normal', ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='canny', 
model='control-lora-canny-rank128 [c910cde9]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=100, threshold_b=200, guidance_start=0, guidance_end=0.57, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "D:\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable
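The root error in this run is different again: `apply_refiner() missing 1 required positional argument: 'x'` means the AnimateDiff extension's hijacked forward function is still calling the one-argument form of `sd_samplers_common.apply_refiner`, while the installed Forge version evidently also wants an `x`. The real fix belongs in the extension, but the generic shape of a compatibility shim looks like the sketch below; both `*_apply_refiner` functions are stand-ins, not the actual Forge or AnimateDiff code:

```python
import inspect

def old_apply_refiner(denoiser):       # stand-in: old one-argument signature
    return False

def new_apply_refiner(denoiser, x):    # stand-in: the signature the traceback implies
    return False

def call_compat(apply_refiner, denoiser, x):
    """Call apply_refiner with or without x, depending on what it accepts."""
    if "x" in inspect.signature(apply_refiner).parameters:
        return apply_refiner(denoiser, x)
    return apply_refiner(denoiser)

print(call_compat(old_apply_refiner, None, None))  # False
print(call_compat(new_apply_refiner, None, None))  # False
```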
I'm catching this error randomly when using AnimateDiff with IP-Adapter. It happens seemingly at random from my point of view: no settings changed, generations worked great, and then suddenly it throws this error and the whole UI needs restarting.
I too am getting this error when using the integrated ControlNet and SDXL models.
I too am getting this error when using the integrated ControlNet and SDXL models.
Not sure if this will work in that case, but try refreshing the checkpoint list in the quickbar (top left) and see if that lets you generate. It works as a temporary workaround in my situation (I get that error using the Model Mixer merging extension).
Hoping for a proper fix.
Same here... IP-Adapter. It's sad that the creators of ControlNet can't get this working properly.
I too am getting this error when using the integrated ControlNet and SDXL models.
Not sure if this will work in that case, but try refreshing the checkpoint list in the quickbar (top left) and see if that lets you generate. It works as a temporary workaround in my situation (I get that error using the Model Mixer merging extension).
Hoping for a proper fix.
Thanks. It didn't help, but I appreciate the suggestion.
Same here... IP-Adapter. It's sad that the creators of ControlNet can't get this working properly.
I try not to browbeat the devs; they're working for free, after all.
I did notice this issue is closed; perhaps they're not watching it anymore.
OK, I found a solution: I was using the wrong preprocessor for the IP-Adapter. lllyasviel (sorry for mangling the name) recently published a proper list of which preprocessor goes with which IP-Adapter model. Once those two match each other, it works. So in case of such errors, I'd suggest double-checking that your preprocessors match your models; it solved mine.
I opened another ticket with my console logs, and it appears I am dumb of ass. Even though I downloaded the SDXL ControlNet models, the UI defaulted to the SD15 ones and I didn't catch it. Try making sure you have SDXL ControlNet models selected.
Yeah... often those errors are caused by users, but it would be nice if the platform were able to catch those obvious mistakes and report them in some human-readable way :D
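In the spirit of that suggestion, a pre-flight check could look roughly like the sketch below. To be clear, this is a wishful illustration: the filename heuristic and every helper name are invented here, and nothing like this currently exists in Forge.

```python
def guess_family(name: str) -> str:
    """Crude filename heuristic: classify a model file as sd15, sdxl, or unknown."""
    lowered = name.lower()
    if "xl" in lowered:
        return "sdxl"
    if "sd15" in lowered or "v11" in lowered:
        return "sd15"
    return "unknown"

def check_controlnet_match(checkpoint: str, controlnet: str) -> None:
    """Raise a readable error when the checkpoint and ControlNet families differ."""
    ck, cn = guess_family(checkpoint), guess_family(controlnet)
    if "unknown" not in (ck, cn) and ck != cn:
        raise ValueError(
            f"ControlNet model '{controlnet}' looks like {cn}, but checkpoint "
            f"'{checkpoint}' is {ck}; pick a ControlNet model that matches."
        )

try:
    check_controlnet_match("sdxl_lightning_4step.safetensors",
                           "control_v11p_sd15_canny.safetensors")
except ValueError as exc:
    print(exc)  # one readable line instead of a mid-sampling assertion crash
```

A check like this would have turned the SD15/SDXL mix-ups reported above into a single clear message.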
I had the same stuff happening when I used ControlNet in img2img instead of txt2img. In txt2img it doesn't give you that error (you have to experiment if, like me, you haven't used "depth", for example). Once you already have the right composition, pose, or whatever, you can drag the image over there... Every error I've gotten like that, I was shamefully pushing my hand into my face, knowing the problem was me. I'm using Forge, btw.
Same with "Multiple Inputs", which is missing in Forge; it's even written that it was deactivated on purpose. You have to open many units and put one picture after another into a new unit, for the same result but much worse... I don't know what went through his head doing it like that. (In settings, btw, searching there for "controlnet", you can enable more than 3 units, for more pictures for FaceID for example.)
Somehow everything seems a bit unintuitive to me.
Trying to use gif2gif: TypeError: 'NoneType' object is not iterable.
+1
Self Attention Guidance
Same
Another one!
I got this error to go away by turning off ADetailer.
After changing the sampling method, generation completes successfully.
In my case I was using an SD15 ControlNet model by mistake; I switched to an SDXL ControlNet model and the error disappeared. https://huggingface.co/lllyasviel/sd_control_collection/tree/main?_fsi=BRszZhff
I often encounter this problem when I enable HyperTile; if I turn it off, it works (SD webui Forge).
I get this randomly every few image generations, regardless of ControlNet usage.