
'NoneType' object is not iterable (SD1.5, 512x512)

Open · yincangshiwei opened this issue 1 year ago · 9 comments

In SD 1.5, 512x512 doesn't work, and neither does 768x768 or 512x640 (screenshots attached). Conversely, having LayerDiffuse use the SDXL-method model while the SD base model is 1.5 does actually run, but some of the results are not ideal (screenshot attached).
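As a side note on the error message itself: it is a secondary symptom. The webui's call queue wraps each job as `res = list(func(*args, **kwargs))` (visible in the full traceback posted later in this thread), so any job that fails internally and hands back None surfaces as this TypeError rather than as its real cause. A minimal sketch of that pattern, with a hypothetical `generate` standing in for the txt2img worker:

```python
# Minimal sketch of why the UI reports "'NoneType' object is not iterable".
# modules/call_queue.py does `res = list(func(*args, **kwargs))` (see the
# traceback later in this thread); if the wrapped job fails internally and
# returns None, list(None) raises exactly this TypeError.

def generate(*args, **kwargs):
    # Hypothetical stand-in for the txt2img worker: the real failure
    # (an exception inside sampling) is caught and logged upstream, and
    # the worker hands back None instead of a result tuple.
    return None

def f(*args, **kwargs):
    return list(generate(*args, **kwargs))

try:
    f()
except TypeError as e:
    print(e)  # 'NoneType' object is not iterable
```

So the interesting question is not the TypeError but whatever made the worker bail out first.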

yincangshiwei · Mar 14 '24

My problem is exactly the same as yours. I'm on Ubuntu 20.04 and my graphics card is an RTX 4090. I can also use the SDXL model, but using the 1.5 model reports the same error as yours. (Screenshots attached.)


wangwenqiao666 · Mar 14 '24

I get the same resulting error, though I'm not sure the reason is the same. I have no problems with resolutions; rather, a specific ControlNet unit is what generates the error. If I don't use it, there is no error. This happens with InsightFace+CLIP-H (IPAdapter), independent of the specific model I use.

Here's the cmd output:


```
2024-03-17 14:35:13,224 - ControlNet - INFO - ControlNet Input Mode: InputMode.SIMPLE
2024-03-17 14:35:13,233 - ControlNet - INFO - Using preprocessor: InsightFace+CLIP-H (IPAdapter)
2024-03-17 14:35:13,233 - ControlNet - INFO - preprocessor resolution = 512
2024-03-17 14:35:13,270 - ControlNet - INFO - Current ControlNet IPAdapterPatcher: E:\ai_gh_repos\sd.webui\webui\models\ControlNet\ip-adapter-faceid-portrait-v11_sd15.bin
Warning: field infotext in API payload not found in <modules.processing.StableDiffusionProcessingTxt2Img object at 0x00000275EF8B92D0>.
2024-03-17 14:35:13,654 - ControlNet - INFO - ControlNet Method InsightFace+CLIP-H (IPAdapter) patched.
[Layer Diffusion] LayerMethod.FG_ONLY_ATTN_SD15
Reuse 1 loaded models
To load target model BaseModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 6311.455324172974
[Memory Management] Model Memory (MB) = 178.406982421875
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 5109.048341751099
Moving model(s) has taken 0.36 seconds
  0%|          | 0/20 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\modules\txt2img.py", line 111, in txt2img_function
    processed = processing.process_images(p)
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\modules\processing.py", line 752, in process_images
    res = process_images_inner(p)
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\modules\processing.py", line 922, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\modules\processing.py", line 1275, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\modules\sd_samplers_kdiffusion.py", line 251, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\modules\sd_samplers_common.py", line 263, in launch_sampling
    return func()
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\modules\sd_samplers_kdiffusion.py", line 251, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\modules\sd_samplers_cfg_denoiser.py", line 182, in forward
    denoised = forge_sampler.forge_sample(self, denoiser_params=denoiser_params,
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\modules_forge\forge_sampler.py", line 88, in forge_sample
    denoised = sampling_function(model, x, timestep, uncond, cond, cond_scale, model_options, seed)
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\ldm_patched\modules\samplers.py", line 289, in sampling_function
    cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond, x, timestep, model_options)
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\ldm_patched\modules\samplers.py", line 258, in calc_cond_uncond_batch
    output = model.apply_model(input_x, timestep, **c).chunk(batch_chunks)
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\ldm_patched\modules\model_base.py", line 90, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 867, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 55, in forward_timestep_embed
    x = layer(x, context, transformer_options)
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\ldm_patched\ldm\modules\attention.py", line 620, in forward
    x = block(x, context=context[i], transformer_options=transformer_options)
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\ldm_patched\ldm\modules\attention.py", line 447, in forward
    return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\ldm_patched\ldm\modules\diffusionmodules\util.py", line 194, in checkpoint
    return func(*inputs)
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\ldm_patched\ldm\modules\attention.py", line 541, in _forward
    n = self.attn2.to_q(n)
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1695, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'AttentionSharingUnit' object has no attribute 'to_q'
'AttentionSharingUnit' object has no attribute 'to_q'
*** Error completing request
*** Arguments: ('task(kam2coc2gv49fdh)', <gradio.routes.Request object at 0x00000275ECA7FA90>, 'face of a man', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, 704282901, False, -1, 0, 0, 0, ..., True, '(SD1.5) Only Generate Transparent Image (Attention Injection)', 1, 1, None, None, None, 'Crop and Resize', ..., ControlNetUnit(enabled=True, module='InsightFace+CLIP-H (IPAdapter)', model='ip-adapter-faceid-portrait-v11_sd15 [53ef197c]', weight=1, image={'image': array([...], dtype=uint8), 'mask': array([...], dtype=uint8)}, resize_mode='Crop and Resize', processor_res=512, threshold_a=0.5, threshold_b=0.5, guidance_start=0, guidance_end=1, pixel_perfect=True, control_mode='Balanced', save_detected_map=True), ControlNetUnit(enabled=False, ...), ControlNetUnit(enabled=False, ...), ...) {}
[argument dump condensed: raw image/mask pixel arrays and disabled-unit defaults elided]
Traceback (most recent call last):
  File "E:\ai_gh_repos\webui_forge_cu121_torch21\webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable
```
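To illustrate what the traceback points at, here is a minimal, self-contained sketch. It assumes, as the frames above suggest, that LayerDiffuse's SD1.5 attention-injection mode swaps a transformer block's cross-attention for an `AttentionSharingUnit` that does not expose the `to_q` projection the patched IP-Adapter path still reaches for; the class layouts below are hypothetical stand-ins, not the extension's actual code.

```python
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    """Standard cross-attention block: exposes the to_q/to_k/to_v projections."""
    def __init__(self, dim: int = 8):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)

class AttentionSharingUnit(nn.Module):
    """LayerDiffuse-style replacement: no to_q attribute at all."""
    def __init__(self, dim: int = 8):
        super().__init__()
        self.inner = nn.Linear(dim, dim)  # hypothetical internals

attn2 = AttentionSharingUnit()
try:
    attn2.to_q(torch.zeros(1, 8))  # what the IP-Adapter hook effectively does
except AttributeError as e:
    # nn.Module.__getattr__ raises exactly the message seen in the traceback
    print(e)  # 'AttentionSharingUnit' object has no attribute 'to_q'
```

That would be consistent with the reports above: the error disappears when either the SD1.5 attention-injection method or the IP-Adapter unit is turned off.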


dermesut · Mar 17 '24

Mine is a 4070 Ti Super, and I ran into the same problem. Any base model is fine, but LayerDiffuse must use the SDXL method; using the SD 1.5 one reports the 'NoneType' object is not iterable error.

cumtcx · Mar 23 '24

Have you guys solved this issue? Could anyone share the solution, please? I'd appreciate it very much.

zhucede · Apr 09 '24

Any progress?

ExissNA · Apr 15 '24

I have the same problem. Is there any progress so far?

franciszzj · Apr 23 '24

I have the same problem when I use IP-Adapter.

chaochao0 · Apr 26 '24

+1, same here.

Willber1995 · May 13 '24

Same problem, and no one seems to care.

wktra · Aug 17 '24