stable-diffusion-webui-forge
[Bug]: ksampler error
Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
What happened?
Any txt2img generation with a k-diffusion sampler fails immediately at 0/20 steps. The traceback (reproduced in full in the console logs below) ends inside the sd-webui-resharpen extension's hijacked callback:

0%| | 0/20 [00:00<?, ?it/s]
Traceback (most recent call last):
...
File "Z:\Stable Forge\stable-diffusion-webui-forge\repositories\k-diffusion\k_diffusion\sampling.py", line 596, in sample_dpmpp_2m
callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
File "Z:\Stable Forge\stable-diffusion-webui-forge\extensions\sd-webui-resharpen\scripts\resharpen.py", line 18, in hijack_callback
if not self.trajectory_enable:
AttributeError: 'KDiffusionSampler' object has no attribute 'trajectory_enable'
Steps to reproduce the problem
Try to generate any image using a k-diffusion sampler (e.g. DPM++ 2M Karras); the error occurs immediately.
What should have happened?
The image should have been generated normally.
What browsers do you use to access the UI ?
No response
Sysinfo
sysinfo-2024-07-04-12-42.json
Console logs
Already up to date.
venv "Z:\Stable Forge\stable-diffusion-webui-forge\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f0.0.17v1.8.0rc-latest-278-gbfee03d8
Commit hash: bfee03d8d9415a925616f40ede030fe7a51cbcfd
CUDA 12.1
Launching Web UI with arguments:
Total VRAM 6144 MB, total RAM 15773 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3060 Laptop GPU : native
Hint: your device supports --pin-shared-memory for potential speed improvements.
Hint: your device supports --cuda-malloc for potential speed improvements.
Hint: your device supports --cuda-stream for potential speed improvements.
VAE dtype: torch.bfloat16
CUDA Stream Activated: False
2024-07-04 22:34:19.784026: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-07-04 22:34:20.406166: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Using pytorch cross attention
ControlNet preprocessor location: Z:\Stable Forge\stable-diffusion-webui-forge\models\ControlNetPreprocessor
CHv1.8.10: Get Custom Model Folder
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 24.6.0, num models: 37
22:34:34 - ReActor - STATUS - Running v0.7.0-a2 on Device: CUDA
[text2prompt] Following databases are available:
all-mpnet-base-v2 : danbooru_strict
Loading weights [90bef92d4f] from Z:\Stable Forge\stable-diffusion-webui-forge\models\Stable-diffusion\babes_31.safetensors
model_type EPS
UNet ADM Dimension 0
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
2024-07-04 22:34:39,113 - ControlNet - INFO - ControlNet UI callback registered.
No Image data blocks found.
To load target model SD1ClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 5102.1494140625
[Memory Management] Model Memory (MB) = 454.2076225280762
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 3623.941791534424
Moving model(s) has taken 0.26 seconds
CHv1.8.10: Set Proxy:
Model loaded in 4.1s (load weights from disk: 0.2s, forge load real models: 1.5s, load textual inversion embeddings: 1.7s, calculate empty prompt: 0.6s).
[text2prompt] Loading database with name "all-mpnet-base-v2 : danbooru_strict"...
[text2prompt] Database loaded
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 37.8s (prepare environment: 9.3s, import torch: 4.5s, import gradio: 1.0s, setup paths: 3.4s, other imports: 0.6s, list SD models: 0.8s, load scripts: 12.4s, create ui: 4.7s, gradio launch: 0.3s, app_started_callback: 0.6s).
To load target model BaseModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 4727.3720703125
[Memory Management] Model Memory (MB) = 1639.4137649536133
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 2063.9583053588867
Moving model(s) has taken 0.76 seconds
0%| | 0/20 [00:00<?, ?it/s]
Traceback (most recent call last):
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
task.work()
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
self.result = self.func(*self.args, **self.kwargs)
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\txt2img.py", line 111, in txt2img_function
processed = processing.process_images(p)
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\processing.py", line 752, in process_images
res = process_images_inner(p)
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\processing.py", line 922, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\processing.py", line 1275, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\sd_samplers_common.py", line 263, in launch_sampling
return func()
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "Z:\Stable Forge\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "Z:\Stable Forge\stable-diffusion-webui-forge\repositories\k-diffusion\k_diffusion\sampling.py", line 596, in sample_dpmpp_2m
callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
File "Z:\Stable Forge\stable-diffusion-webui-forge\extensions\sd-webui-resharpen\scripts\resharpen.py", line 18, in hijack_callback
if not self.trajectory_enable:
AttributeError: 'KDiffusionSampler' object has no attribute 'trajectory_enable'
'KDiffusionSampler' object has no attribute 'trajectory_enable'
*** Error completing request
*** Arguments: ('task(h4al8m6lfhhlbhg)', <gradio.routes.Request object at 0x000001D0C0B17F70>, '', '', ['young russian'], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, True, False, False, False, 'base', False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, 'Basic', '', 'Horizontal', 'None', 0.5, [['0.00:0.50', '0.00:1.00', '1.0'], ['0.50:1.00', '0.00:1.00', '1.0']], False, '(SDXL) Only Generate Transparent Image (Attention Injection)', 1, 1, None, None, None, 'Crop and Resize', False, '', '', '', False, 3, 0, 'full', 0, 'middle', 0, '', False, 7, 3, 0, 'full', 0, -1.0, -1.0, False, False, 20, 4, 4, 0.4, 0.95, 2, 2, 0.4, 0.5, False, 1, False, False, 'uncond', 0.2, 0.8, 0.5, 0.1, 1, 0, 0, 0, True, False, '1.5', 0, False, 0.01, 0.5, -0.13, 0, 0, 0, 0, None, False, '0', '0', 'inswapper_128.onnx', 'CodeFormer', 1, True, 'None', 1, 1, False, True, 1, 0, 0, False, 0.5, True, False, 'CUDA', False, 0, 'None', '', None, False, False, 0.5, 0, False, False, 'Matrix', 'Columns', 'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Attention', [False], '0', '0', '0.4', None, '0', '0', False, 
False, 0, 'Flat', 0, 'Flat', False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, False, False, None, [], [], [], [], '', '', '', '', False, None, None, '', '', '', '', 'Auto rename', {'label': 'Upload avatars config'}, 'Open outputs directory', 'Export to WebUI style', True, {'label': 'Presets'}, {'label': 'QC preview'}, '', [], 'Select', 'QC scan', 'Show pics', None, False, False, 'positive', 'comma', 0, False, False, 'start', '', False, False, 100.0, True, '', '', 0.0, '', 1.0, 0.1, '', 3000.0, True, 'mizuni-concept-v1', 1.0, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, [], 30, '', 4, [], 1, '', '', '', '') {}
Traceback (most recent call last):
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable
---
0%| | 0/20 [00:00<?, ?it/s]
Traceback (most recent call last):
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
task.work()
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
self.result = self.func(*self.args, **self.kwargs)
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\txt2img.py", line 111, in txt2img_function
processed = processing.process_images(p)
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\processing.py", line 752, in process_images
res = process_images_inner(p)
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\processing.py", line 922, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\processing.py", line 1275, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\sd_samplers_common.py", line 263, in launch_sampling
return func()
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "Z:\Stable Forge\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "Z:\Stable Forge\stable-diffusion-webui-forge\repositories\k-diffusion\k_diffusion\sampling.py", line 596, in sample_dpmpp_2m
callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
File "Z:\Stable Forge\stable-diffusion-webui-forge\extensions\sd-webui-resharpen\scripts\resharpen.py", line 18, in hijack_callback
if not self.trajectory_enable:
AttributeError: 'KDiffusionSampler' object has no attribute 'trajectory_enable'
'KDiffusionSampler' object has no attribute 'trajectory_enable'
*** Error completing request
*** Arguments: ('task(392kx17yix5b4m8)', <gradio.routes.Request object at 0x000001D0BE749A50>, 'man in the room\n', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, True, False, False, False, 'base', False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, 'Basic', '', 'Horizontal', 'None', 0.5, [['0.00:0.50', '0.00:1.00', '1.0'], ['0.50:1.00', '0.00:1.00', '1.0']], False, '(SDXL) Only Generate Transparent Image (Attention Injection)', 1, 1, None, None, None, 'Crop and Resize', False, '', '', '', False, 3, 0, 'full', 0, 'middle', 0, '', False, 7, 3, 0, 'full', 0, -1.0, -1.0, False, False, 20, 4, 4, 0.4, 0.95, 2, 2, 0.4, 0.5, False, 1, False, False, 'uncond', 0.2, 0.8, 0.5, 0.1, 1, 0, 0, 0, True, False, '1.5', 0, False, 0.01, 0.5, -0.13, 0, 0, 0, 0, None, False, '0', '0', 'inswapper_128.onnx', 'CodeFormer', 1, True, 'None', 1, 1, False, True, 1, 0, 0, False, 0.5, True, False, 'CUDA', False, 0, 'None', '', None, False, False, 0.5, 0, False, False, 'Matrix', 'Columns', 'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Attention', [False], '0', '0', '0.4', None, '0', '0', False, 
False, 0, 'Flat', 0, 'Flat', False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, False, False, None, [], [], [], [], '', '', '', '', False, None, None, '', '', '', '', 'Auto rename', {'label': 'Upload avatars config'}, 'Open outputs directory', 'Export to WebUI style', True, {'label': 'Presets'}, {'label': 'QC preview'}, '', [], 'Select', 'QC scan', 'Show pics', None, False, False, 'positive', 'comma', 0, False, False, 'start', '', False, False, 100.0, True, '', '', 0.0, '', 1.0, 0.1, '', 3000.0, True, 'mizuni-concept-v1', 1.0, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, [], 30, '', 4, [], 1, '', '', '', '') {}
Traceback (most recent call last):
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable
---
0%| | 0/20 [00:00<?, ?it/s]
Traceback (most recent call last):
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
task.work()
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
self.result = self.func(*self.args, **self.kwargs)
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\txt2img.py", line 111, in txt2img_function
processed = processing.process_images(p)
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\processing.py", line 752, in process_images
res = process_images_inner(p)
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\processing.py", line 922, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\processing.py", line 1275, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\sd_samplers_common.py", line 263, in launch_sampling
return func()
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "Z:\Stable Forge\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "Z:\Stable Forge\stable-diffusion-webui-forge\repositories\k-diffusion\k_diffusion\sampling.py", line 596, in sample_dpmpp_2m
callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
File "Z:\Stable Forge\stable-diffusion-webui-forge\extensions\sd-webui-resharpen\scripts\resharpen.py", line 18, in hijack_callback
if not self.trajectory_enable:
AttributeError: 'KDiffusionSampler' object has no attribute 'trajectory_enable'
'KDiffusionSampler' object has no attribute 'trajectory_enable'
*** Error completing request
*** Arguments: ('task(qxmotl4tfae8qlq)', <gradio.routes.Request object at 0x000001D0C0B16200>, 'man in the room\n', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, True, False, False, False, 'base', False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, 'Basic', '', 'Horizontal', 'None', 0.5, [['0.00:0.50', '0.00:1.00', '1.0'], ['0.50:1.00', '0.00:1.00', '1.0']], False, '(SDXL) Only Generate Transparent Image (Attention Injection)', 1, 1, None, None, None, 'Crop and Resize', False, '', '', '', False, 3, 0, 'full', 0, 'middle', 0, '', False, 7, 3, 0, 'full', 0, -1.0, -1.0, False, False, 20, 4, 4, 0.4, 0.95, 2, 2, 0.4, 0.5, False, 1, False, False, 'uncond', 0.2, 0.8, 0.5, 0.1, 1, 0, 0, 0, True, False, '1.5', 0, False, 0.01, 0.5, -0.13, 0, 0, 0, 0, None, False, '0', '0', 'inswapper_128.onnx', 'CodeFormer', 1, True, 'None', 1, 1, False, True, 1, 0, 0, False, 0.5, True, False, 'CUDA', False, 0, 'None', '', None, False, False, 0.5, 0, False, False, 'Matrix', 'Columns', 'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Attention', [False], '0', '0', '0.4', None, '0', '0', False, 
False, 0, 'Flat', 0, 'Flat', False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, False, False, None, [], [], [], [], '', '', '', '', False, None, None, '', '', '', '', 'Auto rename', {'label': 'Upload avatars config'}, 'Open outputs directory', 'Export to WebUI style', True, {'label': 'Presets'}, {'label': 'QC preview'}, '', [], 'Select', 'QC scan', 'Show pics', None, False, False, 'positive', 'comma', 0, False, False, 'start', '', False, False, 100.0, True, '', '', 0.0, '', 1.0, 0.1, '', 3000.0, True, 'mizuni-concept-v1', 1.0, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, [], 30, '', 4, [], 1, '', '', '', '') {}
Traceback (most recent call last):
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable
---
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:02<00:00, 7.43it/s]
To load target model AutoencoderKL█████████████████████████████████████████████████▋ | 19/20 [00:02<00:00, 7.18it/s]
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 3014.5966796875
[Memory Management] Model Memory (MB) = 159.55708122253418
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 1831.0395984649658
Moving model(s) has taken 0.13 seconds
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:03<00:00, 6.44it/s]
0%| | 0/20 [00:00<?, ?it/s]
Traceback (most recent call last):
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
task.work()
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
self.result = self.func(*self.args, **self.kwargs)
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\txt2img.py", line 111, in txt2img_function
processed = processing.process_images(p)
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\processing.py", line 752, in process_images
res = process_images_inner(p)
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\processing.py", line 922, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\processing.py", line 1275, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\sd_samplers_common.py", line 263, in launch_sampling
return func()
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\sd_samplers_lcm.py", line 75, in sample_lcm
callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
File "Z:\Stable Forge\stable-diffusion-webui-forge\extensions\sd-webui-resharpen\scripts\resharpen.py", line 18, in hijack_callback
if not self.trajectory_enable:
AttributeError: 'LCMSampler' object has no attribute 'trajectory_enable'
'LCMSampler' object has no attribute 'trajectory_enable'
*** Error completing request
*** Arguments: ('task(qgdgk4yl2shofro)', <gradio.routes.Request object at 0x000001D0C0F7E4A0>, 'man in the room\n', '', [], 20, 'LCM', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, True, False, False, False, 'base', False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, 'Basic', '', 'Horizontal', 'None', 0.5, [['0.00:0.50', '0.00:1.00', '1.0'], ['0.50:1.00', '0.00:1.00', '1.0']], False, '(SDXL) Only Generate Transparent Image (Attention Injection)', 1, 1, None, None, None, 'Crop and Resize', False, '', '', '', False, 3, 0, 'full', 0, 'middle', 0, '', False, 7, 3, 0, 'full', 0, -1.0, -1.0, False, False, 20, 4, 4, 0.4, 0.95, 2, 2, 0.4, 0.5, False, 1, False, False, 'uncond', 0.2, 0.8, 0.5, 0.1, 1, 0, 0, 0, True, False, '1.5', 0, False, 0.01, 0.5, -0.13, 0, 0, 0, 0, None, False, '0', '0', 'inswapper_128.onnx', 'CodeFormer', 1, True, 'None', 1, 1, False, True, 1, 0, 0, False, 0.5, True, False, 'CUDA', False, 0, 'None', '', None, False, False, 0.5, 0, False, False, 'Matrix', 'Columns', 'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Attention', [False], '0', '0', '0.4', None, '0', '0', False, False, 0, 
'Flat', 0, 'Flat', False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, False, False, None, [], [], [], [], '', '', '', '', False, None, None, '', '', '', '', 'Auto rename', {'label': 'Upload avatars config'}, 'Open outputs directory', 'Export to WebUI style', True, {'label': 'Presets'}, {'label': 'QC preview'}, '', [], 'Select', 'QC scan', 'Show pics', None, False, False, 'positive', 'comma', 0, False, False, 'start', '', False, False, 100.0, True, '', '', 0.0, '', 1.0, 0.1, '', 3000.0, True, 'mizuni-concept-v1', 1.0, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, [], 30, '', 4, [], 1, '', '', '', '') {}
Traceback (most recent call last):
File "Z:\Stable Forge\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable
---
100%|██████████████████████████████████████████████████████████████████████████████████| 19/19 [00:03<00:00, 6.27it/s]
Total progress: 95%|██████████████████████████████████████████████████████████████▋ | 19/20 [00:02<00:00, 6.91it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 19/19 [00:02<00:00, 7.08it/s]
100%|█████████████████████████████████████████████████████████████████████████████████| 81/81 [00:00<00:00, 236.32it/s]
Memory cleanup has taken 0.76 seconds
To load target model BaseModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 4787.66552734375
[Memory Management] Model Memory (MB) = 1639.4137649536133
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 2124.2517623901367
Moving model(s) has taken 0.40 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 18/18 [00:13<00:00, 1.29it/s]
Memory cleanup has taken 0.75 seconds█████████████████████████████████████████████ | 37/40 [00:19<00:02, 1.37it/s]
Total progress: 92%|█████████████████████████████████████████████████████████████ | 37/40 [00:21<00:01, 1.72it/s]
Total progress: 92%|█████████████████████████████████████████████████████████████ | 37/40 [00:21<00:02, 1.37it/s]
Additional information
No response