Incorrect results when using img2img batch mode
Now I've tested again and generation starts, but the result is always just the first image of the batch. It's as if the loop over the batch isn't working properly.
https://github.com/lllyasviel/stable-diffusion-webui-forge/commit/f94bcae7b00f30a9ef0540277770e5acdbd12c72
```
Reuse 1 loaded models
To load target model BaseModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 6634.41796875
[Memory Management] Model Memory (MB) = 0.0
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 5610.41796875
Moving model(s) has taken 0.07 seconds
*** Error running before_process_init_images: D:\webui_forge_cu121_torch21\webui\extensions\sd-forge-layerdiffuse\scripts\forge_layerdiffusion.py
Traceback (most recent call last):
  File "D:\webui_forge_cu121_torch21\webui\modules\scripts.py", line 868, in before_process_init_images
    script.before_process_init_images(p, pp, *script_args, **kwargs)
  File "D:\webui_forge_cu121_torch21\webui\extensions\sd-forge-layerdiffuse\scripts\forge_layerdiffusion.py", line 429, in before_process_init_images
    latent_offset = vae_transparent_encoder.encode(image)
  File "D:\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\webui_forge_cu121_torch21\webui\extensions\sd-forge-layerdiffuse\lib_layerdiffusion\models.py", line 319, in encode
    list_of_np_rgb_padded = [pad_rgb(x) for x in list_of_np_rgba_hwc_uint8]
  File "D:\webui_forge_cu121_torch21\webui\extensions\sd-forge-layerdiffuse\lib_layerdiffusion\models.py", line 319, in <listcomp>
    list_of_np_rgb_padded = [pad_rgb(x) for x in list_of_np_rgba_hwc_uint8]
  File "D:\webui_forge_cu121_torch21\webui\extensions\sd-forge-layerdiffuse\lib_layerdiffusion\models.py", line 207, in pad_rgb
    pyramid = build_alpha_pyramid(color=np_rgba_hwc[..., :3], alpha=np_rgba_hwc[..., 3:])
  File "D:\webui_forge_cu121_torch21\webui\extensions\sd-forge-layerdiffuse\lib_layerdiffusion\models.py", line 190, in build_alpha_pyramid
    current_premultiplied_color = color * alpha
ValueError: operands could not be broadcast together with shapes (512,512,3) (512,512,0)
```
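For what it's worth, the shapes in the error suggest the image reaching the transparent VAE encoder has no alpha channel: slicing channels `3:` of an H×W×3 array gives a zero-width axis, which NumPy then can't broadcast against the 3 RGB channels. A minimal sketch of that mismatch, assuming a plain NumPy array stands in for the batch image:

```python
import numpy as np

# Hypothetical repro: an RGB image with no alpha channel, where the
# LayerDiffuse encoder expects an RGBA (H, W, 4) uint8 array.
np_rgba_hwc = np.zeros((512, 512, 3), dtype=np.uint8)

color = np_rgba_hwc[..., :3]  # shape (512, 512, 3)
alpha = np_rgba_hwc[..., 3:]  # shape (512, 512, 0) -- the slice is empty

# Same multiplication as in build_alpha_pyramid(); raises:
# ValueError: operands could not be broadcast together with shapes (512,512,3) (512,512,0)
premultiplied = color * alpha
```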
I am encountering this bug too. :( In img2img I add several pictures in the Batch tab. When generating, the correct picture appears in the preview, but the saved picture is always the first one, and so is the temporary picture. This only happens with LayerDiffuse enabled. Is this bug supposed to be fixed? My version is b1e66511 2024-08-31 and Forge doesn't find any updates.
edit: It also stays stuck in batch mode: when I switch back to plain img2img instead of batch, it keeps processing all the pictures from the batch instead of just the picture in the img2img tab. And when I empty the batch to avoid this, I get this error:
File "D:\apps\stable-diffusion\Forge_2024\webui\modules_forge\main_thread.py",
line 30, in work
self.result = self.func(*self.args, **self.kwargs)
File "D:\apps\stable-diffusion\Forge_2024\webui\modules\img2img.py", line 234,
in img2img_function
assert isinstance(img2img_batch_upload, list) and img2img_batch_upload
AssertionError
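That assertion fires because an empty list is falsy in Python: clearing the batch upload field makes `isinstance(img2img_batch_upload, list) and img2img_batch_upload` evaluate to False even though the type check passes. A minimal sketch of the same check outside the webui:

```python
# Sketch of the check that fails in modules/img2img.py when the batch
# upload field has been emptied: an empty list passes the isinstance()
# test but is falsy, so the assertion raises.
img2img_batch_upload = []  # what the UI appears to pass after clearing the batch

assert isinstance(img2img_batch_upload, list) and img2img_batch_upload
# AssertionError
```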