TypeError: 'NoneType' object is not iterable
Latest version of A1111 Forge. When generating, the cmd window shows this:
token_merging_ratio = 0.2
[Layer Diffusion] LayerMethod.FG_ONLY_ATTN
To load target model SDXL
Begin to load 1 model
Reuse 1 loaded models
[Memory Management] Current Free GPU Memory (MB) = 20344.2802734375
[Memory Management] Model Memory (MB) = 4210.9375
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 15109.3427734375
Moving model(s) has taken 0.71 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 41/41 [00:07<00:00, 5.58it/s]
To load target model AutoencoderKL
Begin to load 1 model
Reuse 1 loaded models
[Memory Management] Current Free GPU Memory (MB) = 16109.3330078125
[Memory Management] Model Memory (MB) = 0.0
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 15085.3330078125
Moving model(s) has taken 0.01 seconds
  0%|                                                                                  | 0/8 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "H:\webui_forge_cu121_torch21\webui\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "H:\webui_forge_cu121_torch21\webui\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "H:\webui_forge_cu121_torch21\webui\modules\txt2img.py", line 111, in txt2img_function
    processed = processing.process_images(p)
  File "H:\webui_forge_cu121_torch21\webui\modules\processing.py", line 752, in process_images
    res = process_images_inner(p)
  File "H:\webui_forge_cu121_torch21\webui\modules\processing.py", line 936, in process_images_inner
    x_samples_ddim = decode_latent_batch(p.sd_model, samples_ddim, target_device=devices.cpu, check_for_nans=True)
  File "H:\webui_forge_cu121_torch21\webui\modules\processing.py", line 638, in decode_latent_batch
    sample = decode_first_stage(model, batch[i:i + 1])[0]
  File "H:\webui_forge_cu121_torch21\webui\modules\sd_samplers_common.py", line 74, in decode_first_stage
    return samples_to_images_tensor(x, approx_index, model)
  File "H:\webui_forge_cu121_torch21\webui\modules\sd_samplers_common.py", line 57, in samples_to_images_tensor
    x_sample = model.decode_first_stage(sample)
  File "H:\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "H:\webui_forge_cu121_torch21\webui\modules_forge\forge_loader.py", line 239, in patched_decode_first_stage
    sample = sd_model.forge_objects.vae.decode(sample).movedim(-1, 1) * 2.0 - 1.0
  File "H:\webui_forge_cu121_torch21\webui\ldm_patched\modules\sd.py", line 288, in decode
    return wrapper(self.decode_inner, samples_in)
  File "H:\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "H:\webui_forge_cu121_torch21\webui\extensions\sd-forge-layerdiffusion\lib_layerdiffusion\models.py", line 249, in wrapper
    y = self.estimate_augmented(pixel, latent)
  File "H:\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "H:\webui_forge_cu121_torch21\webui\extensions\sd-forge-layerdiffusion\lib_layerdiffusion\models.py", line 224, in estimate_augmented
    eps = self.estimate_single_pass(feed_pixel, feed_latent).clip(0, 1)
  File "H:\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "H:\webui_forge_cu121_torch21\webui\extensions\sd-forge-layerdiffusion\lib_layerdiffusion\models.py", line 202, in estimate_single_pass
    y = self.model.model(pixel, latent)
  File "H:\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "H:\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "H:\webui_forge_cu121_torch21\webui\extensions\sd-forge-layerdiffusion\lib_layerdiffusion\models.py", line 174, in forward
    sample = upsample_block(sample, res_samples, emb)
  File "H:\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "H:\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "H:\webui_forge_cu121_torch21\system\python\lib\site-packages\diffusers\models\unet_2d_blocks.py", line 2181, in forward
    hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 32 but got size 31 for tensor number 1 in the list.
Sizes of tensors must match except in dimension 1. Expected size 32 but got size 31 for tensor number 1 in the list.
*** Error completing request
*** Arguments: ('task(3sjzin41rxzjll2)', <gradio.routes.Request object at 0x000001F098EE0850>, 'score_9,score_8_up,score_7_up,best quality,masterpiece,4k,uncensored,prefect lighting,anime BREAK\nlora:KakudateKarinPonyXL:1,kkba,halo,very long hair,gradient hair BREAK ', 'source_comic,source_furry,source_pony,sketch,painting,monochrome,jpeg artifacts,extra digit,fewer digits,unaestheticXL2v10,', [], 41, 'Euler a', 1, 1, 7, 984, 552, False, 0.26, 2, '4x-AnimeSharp', 6, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, False, False, 'base', False, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32,
'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'FredZhang7/anime-anything-promptgen-v2', '', True, 'Only Generate Transparent Image (Attention Injection)', 1, 1, None, None, None, 'Crop and Resize', False, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, 
use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 256, 2, 
0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
  File "H:\webui_forge_cu121_torch21\webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable
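For what it's worth, the final TypeError looks like a symptom rather than the root cause: the real failure is the RuntimeError in the VAE decode, after which the wrapped task has no result to return. A minimal sketch of that masking effect (hypothetical code, not Forge's actual wrapper):

```python
# Sketch (not Forge's actual code) of why the secondary TypeError appears:
# if the task raises internally and the wrapper reports the error but
# returns None, then list(None) in the caller fails with
# "'NoneType' object is not iterable", hiding the real RuntimeError.
def wrapped_task():
    try:
        raise RuntimeError("Sizes of tensors must match ...")  # the real error
    except RuntimeError as e:
        print(e)     # error is logged here...
        return None  # ...but nothing is returned to the caller

try:
    res = list(wrapped_task())  # caller expects an iterable of results
except TypeError as e:
    print(e)  # 'NoneType' object is not iterable
```

So the 'NoneType' message in the title is just the generic follow-on error; the tensor-size mismatch above it is what needs fixing.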
In another issue, I noticed the same 'NoneType' error mentioned in connection with higher batch sizes. In that case images could still be generated; in my case I get nothing but this error.
This is for Forge not A1111
I mean the error appears in Forge. The install path in the log shows the details.
Hey, I had a very similar problem in Forge. I realized it was because my resolutions didn't match certain sizes; try generating at 896x1152.
That didn't work, and the output became even stranger.
I often get this error in Forge when using batch size > 1 or when using Fooocus inpainting. I don't know if it's related.
As you can see, all default settings, batch size 1. I'm so confused.
I had the same issue.
In my environment, an error occurs for most sizes.
success
512×512
896×896
failure
944×944
912×912
920×920
1104×1104
The error is shown below.
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 36 but got size 35 for tensor number 1 in the list. Sizes of tensors must match except in dimension 1. Expected size 36 but got size 35 for tensor number 1 in the list.
The log from just before the error is below.
I can see it's a problem with tensor sizes, but I wasn't able to analyze it further.
I'll contact you if I find out anything else.
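The "expected N, got N-1" pattern is typical of a UNet skip connection: a stride-2 downsample rounds an odd feature-map size, and the 2x upsample can then no longer reproduce the size saved for `torch.cat`. Here is a toy model of that arithmetic (the stage count and conv parameters are my assumptions, not the extension's actual architecture):

```python
# Toy model (not the extension's code): track feature-map sizes through
# stride-2 downsamples and 2x upsamples, as in a UNet with skip connections.
def down(size: int) -> int:
    # stride-2, kernel-3, padding-1 conv: floor((size + 2 - 3) / 2) + 1
    return (size - 1) // 2 + 1

def up(size: int) -> int:
    # nearest-neighbor 2x upsample
    return size * 2

def check(size: int, stages: int = 3) -> bool:
    """Return True if every upsampled size matches its saved skip size."""
    skips = []
    for _ in range(stages):
        skips.append(size)
        size = down(size)
    for skip in reversed(skips):
        size = up(size)
        if size != skip:  # torch.cat would raise here
            return False
    return True
```

In this toy model with three stages, a latent side of 896/8 = 112 survives the round trip, while 912/8 = 114 does not, and 1104/8 = 138 produces exactly a 36-vs-35 mismatch, consistent with the reported error.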
The following sizes were successful.
success
256×256
512×512
768×768
896×896
1024×1024
1280×1280
896×1280
1408×1408
If I make the image size a multiple of 128, it works fine.
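Based on that observation, a workaround is to round the requested dimensions to a multiple of 128 before generating. A hypothetical helper (`snap_to_128` is my name, not a Forge function):

```python
# Hypothetical helper (not part of Forge): round a requested dimension
# down to the nearest multiple of 128, matching the sizes that worked above.
def snap_to_128(size: int) -> int:
    return max(128, size // 128 * 128)

print(snap_to_128(912), snap_to_128(1104))  # 896 1024
```

The failing sizes from the earlier table (912, 944, 1104, ...) all snap to nearby values from the working list.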
Thanks, much appreciated. I found that resolution was one of the causes. Also, if I add one more LoRA or add embeddings such as "bad prompt", the transparent layer stops working and the images turn a strange color.
For me, this was due to having "batch size" above 1.
It was resolution for me too, but 1536x1536 worked when I tried it.
If a multiple of 128 is required for the resolution, why isn't that documented in the README?
