stable-diffusion-webui-forge
[Bug]: Regional Prompter attention mode broken (2024-03-08)
Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [X] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
What happened?
Not sure how to troubleshoot this further. It was working two days ago. I ran Forge this morning and there were some updates. I installed it again in a clean directory, but it has the same issue. With Regional Prompter, the settings don't "stick" in the UI, which makes me think it isn't passing the right parameters anymore. When you click "Generate," it gives this familiar error:
Traceback (most recent call last):
File "/opt/ui/forge-2024-03-08/modules/call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable
This seems to be true whether I am trying to use columns or a mask; it just doesn't care. Latent couple doesn't give the same error, but it fails to separate the subjects. I'm 100% certain I used Regional Prompter just two days ago, and maybe even yesterday morning.
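For context, this TypeError is what you get when the wrapped generation function returns None instead of a result; the traceback above shows the `res = list(func(*args, **kwargs))` frame in modules/call_queue.py. A minimal sketch of that pattern (not Forge's actual code; `broken_handler` is hypothetical):

```python
# Sketch of a call_queue-style wrapper around a generation handler.
def wrap_queued_call(func):
    def f(*args, **kwargs):
        # If func bailed out and returned None, list(None) raises the TypeError.
        res = list(func(*args, **kwargs))
        return res
    return f

def broken_handler():
    return None  # hypothetical handler that produced no result

try:
    wrap_queued_call(broken_handler)()
except TypeError as e:
    print(e)  # 'NoneType' object is not iterable
```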
Odd UI behavior: https://youtu.be/lmXAcwo71s0
Steps to reproduce the problem
See also the linked video ...
- Load up Regional Prompter.
- Create a 4-column layout (1,1,1,1).
- Select "columns" and "attention."
- Create template.
Attention and Columns will no longer be selected ...
What should have happened?
They should still be selected. And, my hunch is, they should still pass their parameters to Regional Prompter. :-)
What browsers do you use to access the UI?
Mozilla Firefox, Google Chrome, Microsoft Edge
Sysinfo
Console logs
Already up to date.
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################
################################################################
Running on zaqhack user
################################################################
################################################################
Repo already cloned, using it as install directory
################################################################
################################################################
python venv already activate or run without venv: /opt/ui/forge-2024-03-08/venv
################################################################
################################################################
Accelerating launch.py...
################################################################
glibc version is 2.35
Check TCMalloc: libtcmalloc_minimal.so.4
libtcmalloc_minimal.so.4 is linked with libc.so,execute LD_PRELOAD=/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4
[2024-03-08 11:39:00,783] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Python 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0]
Version: f0.0.17v1.8.0rc-latest-276-g29be1da7
Commit hash: 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
Launching Web UI with arguments: --administrator --enable-insecure-extension-access --listen --port 7860 --theme dark --no-download-sd-model --no-hashing --allow-code --api --xformers --opt-channelslast --opt-split-attention --no-half --pin-shared-memory --cuda-malloc --cuda-stream
Using cudaMallocAsync backend.
Total VRAM 24257 MB, total RAM 128741 MB
xformers version: 0.0.23.post1
Set vram state to: NORMAL_VRAM
Always pin shared GPU memory
Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
VAE dtype: torch.bfloat16
CUDA Stream Activated: True
Using xformers cross attention
ControlNet preprocessor location: /opt/ui/forge-2024-03-08/models/ControlNetPreprocessor
Loading weights [None] from /opt/ui/forge-2024-03-08/models/Stable-diffusion/0_sdxl/ponyDiffusionV6XL_v6StartWithThisOne.safetensors
2024-03-08 11:39:11,143 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL: http://0.0.0.0:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 11.2s (prepare environment: 1.8s, import torch: 3.8s, import gradio: 0.7s, setup paths: 0.6s, other imports: 0.4s, load scripts: 2.0s, create ui: 0.8s, gradio launch: 0.2s, add APIs: 0.7s).
model_type EPS
UNet ADM Dimension 2816
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale'}
To load target model SDXLClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 23750.842864990234
[Memory Management] Model Memory (MB) = 2144.3546981811523
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 20582.488166809082
Moving model(s) has taken 0.16 seconds
Model loaded in 4.5s (load weights from disk: 1.7s, forge load real models: 2.1s, calculate empty prompt: 0.6s).
1,1,1,1 0.3 Horizontal
Regional Prompter Active, Pos tokens : [57, 24, 32, 35, 65], Neg tokens : [15, 21, 27, 17, 11]
2024-03-08 11:39:44,357 - ControlNet - INFO - ControlNet Input Mode: InputMode.SIMPLE
2024-03-08 11:39:44,359 - ControlNet - INFO - Using preprocessor: None
2024-03-08 11:39:44,359 - ControlNet - INFO - preprocessor resolution = 1024
[] []
2024-03-08 11:39:45,079 - ControlNet - INFO - Current ControlNet ControlNetPatcher: /opt/ui/forge-2024-03-08/models/ControlNet/OpenPoseXL2.safetensors
To load target model AutoencoderKL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 21838.87897491455
[Memory Management] Model Memory (MB) = 159.55708122253418
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 20655.321893692017
Moving model(s) has taken 0.02 seconds
[LORA] Loaded /opt/_models/models/Lora/Styles/Smooth Anime Style LoRA XL.safetensors for SDXL-UNet with 722 keys at weight 0.75 (skipped 0 keys)
[LORA] Loaded /opt/_models/models/Lora/Styles/Smooth Anime Style LoRA XL.safetensors for SDXL-CLIP with 264 keys at weight 0.75 (skipped 0 keys)
To load target model SDXLClipModel
Begin to load 1 model
Reuse 1 loaded models
[Memory Management] Current Free GPU Memory (MB) = 21681.987573623657
[Memory Management] Model Memory (MB) = 0.0
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 20657.987573623657
Moving model(s) has taken 0.51 seconds
2024-03-08 11:39:47,496 - ControlNet - INFO - ControlNet Method None patched.
To load target model SDXL
To load target model ControlNet
Begin to load 2 models
[Memory Management] Current Free GPU Memory (MB) = 21675.481714248657
[Memory Management] Model Memory (MB) = 4897.086494445801
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 15754.395219802856
[Memory Management] Current Free GPU Memory (MB) = 16778.385454177856
[Memory Management] Model Memory (MB) = 2386.120147705078
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 13368.265306472778
Moving model(s) has taken 0.87 seconds
hook_forward.<locals>.forward() got an unexpected keyword argument 'transformer_options'
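That last log line is the real clue: Forge's patched attention layers now pass a transformer_options keyword that Regional Prompter's hooked forward() doesn't accept. A rough sketch of the mismatch, simplified and not either project's real code (a tolerant hook that swallows unknown keywords is one plausible fix shape, labeled as a guess here):

```python
# Forge-side call site (simplified): attention modules now receive an
# extra keyword argument, transformer_options.
def call_attention(attn, n, context, value, extra_options):
    return attn(n, context=context, value=value, transformer_options=extra_options)

# A Regional-Prompter-style hook written before that kwarg existed:
def hook_forward(module):
    def forward(x, context=None, value=None):  # no transformer_options, no **kwargs
        return x  # stand-in for the real attention math
    return forward

# A tolerant hook accepts and ignores keywords it doesn't know about:
def hook_forward_tolerant(module):
    def forward(x, context=None, value=None, **kwargs):
        return x  # stand-in for the real attention math
    return forward

new = hook_forward_tolerant(None)
new("x", context="c", value="v", transformer_options={})  # fine

old = hook_forward(None)
try:
    old("x", context="c", value="v", transformer_options={})
except TypeError as e:
    print(e)  # ...forward() got an unexpected keyword argument 'transformer_options'
```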
Additional information
The UI is typically accessed over the network via --listen: a Windows/Mac browser talking to a Linux backend.
Can confirm. Having Regional Prompter active in attention mode causes TypeError: 'NoneType' object is not iterable. Latent seems to work fine.
Same error for me...
I have the same problem. The Regional Prompter extension's attention mode completely stopped working, and latent doesn't separate the subjects.
TypeError: 'NoneType' object is not iterable
I'm on "29be1da" and reverted a few commits back because it worked a few days ago and on "b9705c5" it works again. So it seems one of the last three commits broke this functionality. @lllyasviel
Same here. Regional Prompter has stopped working as you described in txt2img, img2img, anywhere.
I disabled every extension (including built-in) except for Regional Prompter and still received the error. When I disabled RP, I did not get an error. It was working a couple of days ago. Also, no error in A1111.
Someone opened an issue on RP repository: https://github.com/hako-mikan/sd-webui-regional-prompter/issues/307
The one time in a month that I want to use it and it broke, lol. Guess I'll roll back.
In my WebUI Forge, RP and LC don't work at all, even for txt2img.
roll back to "b9705c5"
I wish I knew how to do this. I tried using a YouTube tutorial and just got errors for my troubles.
roll back to "b9705c5"
I wish I knew how to do this. I tried using a youtube tutorial and just got errors for my troubles.
in a terminal window navigate to your root installation folder of WebUI Forge (...\stable-diffusion-webui-forge)
type git reset --hard b9705c5
and hit enter.
type git log -1
and hit enter.
check the first seven digits/letters next to commit. do they match the ones you typed in earlier? if yes, the rollback was successfully
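For anyone who prefers to script it, a small Python equivalent of the same two steps; the install path is only an example, point it at your own directory:

```python
import subprocess

FORGE_DIR = r"D:\stable-diffusion-webui-forge"  # example path; use your install dir
TARGET = "b9705c5"  # known-good commit from this thread

# Roll the working tree back to the known-good commit.
subprocess.run(["git", "reset", "--hard", TARGET], cwd=FORGE_DIR, check=True)

# Verify HEAD now points at the target commit (same check as `git log -1`).
head = subprocess.run(
    ["git", "rev-parse", "--short", "HEAD"],
    cwd=FORGE_DIR, capture_output=True, text=True, check=True,
).stdout.strip()
print("HEAD is now", head)
assert head.startswith(TARGET), "rollback did not take effect"
```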
Same problem: Regional Prompter does not work anymore since I updated Forge today.
roll back to "b9705c5"
I wish I knew how to do this. I tried using a youtube tutorial and just got errors for my troubles.
in a terminal window navigate to your root installation folder of WebUI Forge (...\stable-diffusion-webui-forge)
type
git reset --hard b9705c5
and hit enter.type
git log -1
and hit enter.check the first seven digits/letters next to commit. do they match the ones you typed in earlier? if yes, the rollback was successfully
Wow. That easy? Thanks you're the best.
Same error. Rolling back to commit b9705c5 fixed it. UPD: it breaks LoRA for me... This needs a fix in Forge.
It does not work in txt2img either. Let's hope it will be fixed soon. Forge is great; it would be a pity to switch back to A1111.
Duplicate of https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/242
Same problem! Do something please!
Re-opening since I realized this is actually a different issue. Will be fixed when https://github.com/hako-mikan/sd-webui-regional-prompter/pull/308 is merged.
Upstream PR merged. Update the Regional Prompter extension and it will work now.
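If the built-in extension updater doesn't pick the fix up, pulling the extension's repository directly is equivalent; a minimal sketch, assuming a Windows-style install path (adjust to yours):

```python
import subprocess

# Example path; substitute your own Forge install location.
EXT_DIR = r"D:\stable-diffusion-webui-forge\extensions\sd-webui-regional-prompter"

# Pull the latest Regional Prompter, which includes the merged fix.
subprocess.run(["git", "pull"], cwd=EXT_DIR, check=True)
```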
I believe I'm having this issue now, post-update. Latent works, attention does not. All the same as above, on Forge.
Console logs
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 6665.920343399048
[Memory Management] Model Memory (MB) = 4897.086494445801
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 744.8338489532471
Moving model(s) has taken 1.45 seconds
0%| | 0/30 [00:00<?, ?it/s]
Traceback (most recent call last):
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
task.work()
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
self.result = self.func(*self.args, **self.kwargs)
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\modules\txt2img.py", line 111, in txt2img_function
processed = processing.process_images(p)
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\modules\processing.py", line 752, in process_images
res = process_images_inner(p)
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\modules\processing.py", line 922, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\modules\processing.py", line 1275, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\modules\sd_samplers_common.py", line 263, in launch_sampling
return func()
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\modules\sd_samplers_cfg_denoiser.py", line 182, in forward
denoised = forge_sampler.forge_sample(self, denoiser_params=denoiser_params,
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\modules_forge\forge_sampler.py", line 88, in forge_sample
denoised = sampling_function(model, x, timestep, uncond, cond, cond_scale, model_options, seed)
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\ldm_patched\modules\samplers.py", line 289, in sampling_function
cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\ldm_patched\modules\samplers.py", line 258, in calc_cond_uncond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\ldm_patched\modules\model_base.py", line 90, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 867, in forward
h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 55, in forward_timestep_embed
x = layer(x, context, transformer_options)
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\ldm_patched\ldm\modules\attention.py", line 620, in forward
x = block(x, context=context[i], transformer_options=transformer_options)
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\ldm_patched\ldm\modules\attention.py", line 447, in forward
return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\ldm_patched\ldm\modules\diffusionmodules\util.py", line 194, in checkpoint
return func(*inputs)
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\ldm_patched\ldm\modules\attention.py", line 547, in _forward
n = self.attn2(n, context=context_attn2, value=value_attn2, transformer_options=extra_options)
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\extensions\sd-webui-regional-prompter\scripts\attention.py", line 429, in forward
opx = masksepcalc(px, conp, mask, True, 2)
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\extensions\sd-webui-regional-prompter\scripts\attention.py", line 309, in masksepcalc
context = contexts[:,tll[i][0] * TOKENSCON:tll[i][1] * TOKENSCON,:]
IndexError: list index out of range
list index out of range
*** Error completing request
*** Arguments: ('task(oxa205sa499e3ys)', <gradio.routes.Request object at 0x000002600D6BB3A0>, '1girl, blue shirt, BREAK, 1boy, red shirt,', 'score_5, score_4, negativeXL_D,', [], 30, 'Euler a', 1, 1, 7, 1216, 832, False, 0.7, 2, 'Latent', ... [two ADetailer settings dicts elided] ... 'Mask', 'Columns', 'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Attention', [False, 'Use BREAK to change chunks'], '0', '0', '0.4', {'image': array(... all 255 ..., dtype=uint8), 'mask': array(... all 0 ..., dtype=uint8)}, '0', '0', False, ... [three disabled ControlNetUnit entries and remaining script args elided] ...) {}
Traceback (most recent call last):
File "D:\AI\Stability Matrix\Data\Packages\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable
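Worth noting: this second run actually dies earlier, inside Regional Prompter's masksepcalc() with an IndexError, and the familiar TypeError at the end is just the outer wrapper reporting the aborted job. A hypothetical illustration of one way that indexing can run off the end (`tll` and `TOKENSCON` mirror the names in the traceback; the values are made up):

```python
TOKENSCON = 77  # tokens per CLIP chunk; 77 is the usual value, illustrative here

# tll holds per-region (start_chunk, end_chunk) pairs parsed from the prompt.
# If the prompt yields fewer chunk ranges than the layout has regions,
# tll[i] does not exist for the last region(s).
tll = [(0, 1), (1, 2)]  # two regions' worth of chunks parsed
regions = 3             # but the mask/ratio layout implies three regions

for i in range(regions):
    try:
        start, end = tll[i]
        # context = contexts[:, start * TOKENSCON : end * TOKENSCON, :]
        print(f"region {i}: tokens {start * TOKENSCON}..{end * TOKENSCON}")
    except IndexError:
        print(f"region {i}: list index out of range")  # matches the log above
```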
---