'NoneType' object is not iterable. SD1.5, 512x512.
With SD1.5, 512x512, 768x768, and 512x640 don't work either.
On the other hand, using LayerDiffuse's SDXL method while the SD base model is 1.5 does actually run, but some results are not ideal.
My problem is exactly the same as yours. I'm on Ubuntu 20.04 with an RTX 4090. I can also use the SDXL model, but using the 1.5 model reports the same error as yours.
I get the same error, though I'm not sure the cause is the same. I have no problems with resolutions; instead, a specific ControlNet preprocessor triggers the error: InsightFace+CLIP-H (IPAdapter), independent of the specific model I use. If I don't use it, there's no error.
Here's the console output:
2024-03-17 14:35:13,224 - ControlNet - INFO - ControlNet Input Mode: InputMode.SIMPLE
2024-03-17 14:35:13,233 - ControlNet - INFO - Using preprocessor: InsightFace+CLIP-H (IPAdapter)
2024-03-17 14:35:13,233 - ControlNet - INFO - preprocessor resolution = 512
2024-03-17 14:35:13,270 - ControlNet - INFO - Current ControlNet IPAdapterPatcher: E:\ai_gh_repos\sd.webui\webui\models\ControlNet\ip-adapter-faceid-portrait-v11_sd15.bin
Warning: field infotext in API payload not found in <modules.processing.StableDiffusionProcessingTxt2Img object at 0x00000275EF8B92D0>.
2024-03-17 14:35:13,654 - ControlNet - INFO - ControlNet Method InsightFace+CLIP-H (IPAdapter) patched.
[Layer Diffusion] LayerMethod.FG_ONLY_ATTN_SD15
Reuse 1 loaded models
To load target model BaseModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 6311.455324172974
[Memory Management] Model Memory (MB) = 178.406982421875
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 5109.048341751099
Moving model(s) has taken 0.36 seconds
0%| | 0/20 [00:00<?, ?it/s]
Traceback (most recent call last):
File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\modules_forge\main_thread.py", line 37, in loop
task.work()
File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\modules_forge\main_thread.py", line 26, in work
self.result = self.func(*self.args, **self.kwargs)
File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\modules\txt2img.py", line 111, in txt2img_function
processed = processing.process_images(p)
File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\modules\processing.py", line 752, in process_images
res = process_images_inner(p)
File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\modules\processing.py", line 922, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\modules\processing.py", line 1275, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\modules\sd_samplers_kdiffusion.py", line 251, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\modules\sd_samplers_common.py", line 263, in launch_sampling
return func()
File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\modules\sd_samplers_kdiffusion.py", line 251, in
[... remainder of the traceback and the full txt2img argument dump elided: numpy image/mask arrays plus three ControlNetUnit configs (one enabled, with processor_res=512 and pixel_perfect=True; the other two disabled) ...]
Traceback (most recent call last):
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable
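Worth noting when reading these tracebacks: the final TypeError is not the root cause. Forge runs the generation task on a worker thread (modules_forge/main_thread.py); when that task raises, the UI-side wrapper in modules/call_queue.py still calls `list(...)` on a result that is now None, and that is exactly what produces 'NoneType' object is not iterable. Here is a minimal sketch of that pattern, with hypothetical names (not Forge's actual code):

```python
# Minimal sketch of how a worker-thread wrapper masks the real error
# and surfaces "'NoneType' object is not iterable" instead.
# All names here are hypothetical, not Forge's actual code.
import threading


def run_on_worker(func, *args, **kwargs):
    """Run func on a worker thread; return its result, or None if it raised."""
    box = {}

    def work():
        try:
            box["result"] = func(*args, **kwargs)
        except Exception as exc:
            # The real traceback is only reported on the worker thread.
            print(f"worker task failed: {exc!r}")

    t = threading.Thread(target=work)
    t.start()
    t.join()
    return box.get("result")  # None when the task failed


def generate():
    # Stand-in for the SD1.5 LayerDiffuse / IP-Adapter sampling step that fails.
    raise RuntimeError("sampling failed")


# Mirrors call_queue.py line 57: res = list(func(*args, **kwargs))
res = list(run_on_worker(generate))  # TypeError: 'NoneType' object is not iterable
```

So the actual failure is whatever happened inside the sampler (the SD1.5 LayerDiffuse method or the IP-Adapter patch); the 'NoneType' message is just the UI thread tripping over the missing result.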
Mine is a 4070 Ti Super and I hit the same problem. Any base model works, but LayerDiffuse has to use the SDXL version; using the SD1.5 version raises the 'NoneType' object is not iterable error.
Have you guys solved this issue? Could anyone share the solution, please? Much appreciated.
Any progress?
I have the same problem; has there been any progress so far?
I have the same problem when I use IP-Adapter.
+1, same here.
Same problem; no one seems to care.
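For anyone still stuck: if you want to see the underlying error instead of the masked one, one option is a small local debugging patch to the wrapper in modules/call_queue.py. This is a hypothetical sketch (not the project's actual fix), assuming the enclosing closure provides `func` as in the stock file:

```python
# Hypothetical debugging guard for the wrapper in modules/call_queue.py
# (a sketch, not the project's actual fix): fail loudly instead of
# letting list(None) raise the misleading TypeError.
def f(*args, **kwargs):
    out = func(*args, **kwargs)
    if out is None:
        raise RuntimeError(
            "generation task returned no result; "
            "the real traceback was printed above by the worker thread"
        )
    res = list(out)
    return res
```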