
[Bug]: Only Masked gives error "TypeError: 'NoneType' object is not iterable"

Open Lalimec opened this issue 1 year ago • 0 comments

Checklist

  • [X] The issue exists after disabling all extensions
  • [ ] The issue exists on a clean installation of webui
  • [X] The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • [X] The issue exists in the current version of the webui
  • [X] The issue has not been reported before recently
  • [ ] The issue has been reported before but has not been fixed yet

What happened?

It should produce the output. Oddly, generation itself is not the problem; it proceeds as it should. At the last step, however, the webui fails to composite the final image. This may be related to another problem I have with soft inpainting (it superimposes the masked area no matter what I set, which results in ridiculous images most of the time), but I might open a separate issue for that.
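For reference, this is what I understand the "Only masked" composite step is supposed to do (a simplified sketch only; the real logic lives in modules/processing.py, and paste_back is a hypothetical helper name):

```python
from PIL import Image

def paste_back(full_image: Image.Image, processed_crop: Image.Image, paste_box) -> Image.Image:
    """Hypothetical sketch of the "Only masked" composite: the processed
    crop is resized and pasted back into the original image at the
    region recorded in p.paste_to (x, y, width, height)."""
    x, y, w, h = paste_box
    out = full_image.copy()
    out.paste(processed_crop.resize((w, h)), (x, y))
    return out
```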

Steps to reproduce the problem

1. Add an image to the inpainting tab.
2. Draw a mask.
3. Select "Only masked" and set some padding.
4. Set your generation resolution.
5. Generate and get the error.
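Since I launch with --api (see the console log below), roughly the same settings can also be sent programmatically. A rough, untested sketch using field names from the stock AUTOMATIC1111 /sdapi/v1/img2img payload (input.png and mask.png are placeholders):

```python
import base64
import requests

def b64(path: str) -> str:
    # Encode an image file as base64 for the webui API.
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "init_images": [b64("input.png")],   # image loaded in the inpainting tab
    "mask": b64("mask.png"),             # the drawn mask
    "denoising_strength": 0.53,
    "inpainting_fill": 1,                # masked content: original
    "inpaint_full_res": True,            # "Only masked"
    "inpaint_full_res_padding": 32,      # "Only masked padding, pixels"
    "width": 896,
    "height": 1192,
    "steps": 28,
    "sampler_name": "DPM++ SDE Karras",
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
print(r.status_code)
```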

[Screenshot of the error]

What should have happened?

The webui should output the final image with the processed crop composited back into the original.

What browsers do you use to access the UI?

No response

Sysinfo

sysinfo-2024-02-20-14-25.json

Console logs

Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f0.0.14v1.8.0rc-latest-184-g43c9e3b5
Commit hash: 43c9e3b5ce1642073c7a9684e36b45489eeb4a49
Launching Web UI with arguments: --api --ckpt-dir ./models/Stable-diffusion --hypernetwork-dir ./models/hypernetworks --embeddings-dir ./embeddings --lora-dir ./models/Lora
Total VRAM 24564 MB, total RAM 32539 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4090 : native
VAE dtype: torch.bfloat16
Using pytorch cross attention
*** "Disable all extensions" option was set, will only load built-in extensions ***
ControlNet preprocessor location: D:\_ai\stable-diffusion-webui\models\ControlNetPreprocessor
Loading weights [f0d4872d24] from D:\_ai\stable-diffusion-webui\models\Stable-diffusion\_general\realisticVisionV60B1_v51VAE-inpainting.safetensors
2024-02-20 17:27:11,914 - ControlNet - INFO - ControlNet UI callback registered.
model_type EPS
UNet ADM Dimension 0
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 28.1s (prepare environment: 6.4s, import torch: 7.9s, import gradio: 2.7s, setup paths: 4.0s, import ldm: 0.1s, initialize shared: 0.2s, other imports: 1.1s, load scripts: 3.7s, create ui: 0.8s, gradio launch: 0.5s, add APIs: 0.5s).
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
left over keys: dict_keys(['alphas_cumprod', 'alphas_cumprod_prev', 'betas', 'log_one_minus_alphas_cumprod', 'posterior_log_variance_clipped', 'posterior_mean_coef1', 'posterior_mean_coef2', 'posterior_variance', 'sqrt_alphas_cumprod', 'sqrt_one_minus_alphas_cumprod', 'sqrt_recip_alphas_cumprod', 'sqrt_recipm1_alphas_cumprod'])
loaded straight to GPU
To load target model BaseModel
Begin to load 1 model
To load target model SD1ClipModel
Begin to load 1 model
Moving model(s) has taken 0.18 seconds
Model loaded in 4.6s (load weights from disk: 0.5s, forge load real models: 3.0s, load VAE: 0.3s, calculate empty prompt: 0.7s).

img2img: cinematic photo  <lora:lora_Nagme_rv:1.2> ohwx woman RAW photo, 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3 . 35mm photograph, film, bokeh, professional, 4k, highly detailed
Upscale script freed memory successfully.
tiled upscale: 100%|████████████████| 12/12 [00:01<00:00,  9.62it/s]
To load target model AutoencoderKL
Begin to load 1 model
To load target model SD1ClipModel
Begin to load 1 model
unload clone 1
Moving model(s) has taken 0.20 seconds
To load target model BaseModel
Begin to load 1 model
unload clone 2
Moving model(s) has taken 0.97 seconds
100%|███████████████████████████████| 15/15 [00:04<00:00,  3.44it/s]
Traceback (most recent call last):██| 15/15 [00:03<00:00,  3.82it/s]
  File "D:\_ai\stable-diffusion-webui\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "D:\_ai\stable-diffusion-webui\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "D:\_ai\stable-diffusion-webui\modules\img2img.py", line 236, in img2img_function
    processed = process_images(p)
  File "D:\_ai\stable-diffusion-webui\modules\processing.py", line 750, in process_images
    res = process_images_inner(p)
  File "D:\_ai\stable-diffusion-webui\modules\processing.py", line 1011, in process_images_inner
    original_denoised_image = uncrop(original_denoised_image, (overlay_image.width, overlay_image.height), p.paste_to)
AttributeError: 'NoneType' object has no attribute 'width'
'NoneType' object has no attribute 'width'
*** Error completing request
*** Arguments: ('task(23fehhpob1m1yuy)', 2, 'cinematic photo  <lora:lora_Nagme_rv:1.2> ohwx woman RAW photo, 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3 . 35mm photograph, film, bokeh, professional, 4k, highly detailed', 'drawing, painting, crayon, sketch, graphite, impressionist, noisy, blurry, soft, deformed, ugly, (Naked, nude, topless, nsfw:1.4) (3d, render, cgi, doll, painting, fake, 3d modeling:1.4), (worst quality, low quality:1.4), child, deformed, malformed, bad hands, bad fingers, bad eyes, bad teeth, long body, blurry, duplicated, cloned, duplicate body parts, disfigured, extra limbs, fused fingers, extra fingers, twisted, distorted, malformed hands, malformed fingers, mutated hands and fingers, conjoined, missing limbs, bad anatomy, bad proportions, logo, watermark, text, lowres, mutated, mutilated, artifacts, gross, ugly', [], None, None, {'image': <PIL.Image.Image image mode=RGBA size=928x1232 at 0x17524764C40>, 'mask': <PIL.Image.Image image mode=RGB size=928x1232 at 0x175241A21A0>}, None, None, None, None, 28, 'DPM++ SDE Karras', 4, 0, 1, 1, 1, 5, 1.5, 0.53, 0.0, 1192, 896, 1, 0, 1, 132, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x00000175A3B202B0>, 0, False, 1, 0.5, 4, 0, 0.5, 2, False, '', 0.8, 1570298599, False, -1, 0, 0, 0, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p 
style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "D:\_ai\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
    TypeError: 'NoneType' object is not iterable

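Reading the traceback, overlay_image is None by the time uncrop() runs, so the "Only masked" path apparently never sets it. The inner AttributeError then makes the request return None, and call_queue.py's res = list(func(*args, **kwargs)) turns that into the generic "TypeError: 'NoneType' object is not iterable" shown in the UI. A hypothetical guard around the failing line (a workaround sketch against modules/processing.py, not a confirmed fix):

```python
# Around modules/processing.py line 1011 (location taken from the traceback).
# Hypothetical guard, not a confirmed fix: skip the uncrop rather than
# dereferencing a missing overlay image.
if overlay_image is not None:
    original_denoised_image = uncrop(
        original_denoised_image,
        (overlay_image.width, overlay_image.height),
        p.paste_to,
    )
```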
---

Additional information

No response

Lalimec · Feb 20 '24 14:02