The transparent image is not saved

DiegoRRR opened this issue on Sep 18, 2024 · 4 comments

Two images are saved: one with a blurred background and one with a checkerboard background, but none with a transparent background. :( I tried several models, both 1.5 and XL. Example: 20240918233131-3203883256-2 5D artUniverse10 tmplf0x4rw6

I also checked in AppData\Local\Temp\gradio; there is only the version with the checkerboard background.
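For reference, here is a minimal Pillow check to confirm that none of the saved PNGs actually carry an alpha channel (the folder paths below are only examples for this install; adjust them to your own outputs and gradio temp directories):

```python
# Minimal sketch: scan output folders and report whether each PNG has an alpha channel.
# The paths are examples only -- point them at your actual outputs / gradio temp folders.
from pathlib import Path
from PIL import Image

candidates = [
    Path(r"D:\apps\stable-diffusion\Forge_2024\webui\outputs"),
    Path.home() / "AppData" / "Local" / "Temp" / "gradio",
]

for folder in candidates:
    if not folder.exists():
        continue
    for png in folder.rglob("*.png"):
        with Image.open(png) as im:
            has_alpha = im.mode in ("RGBA", "LA") or "transparency" in im.info
            print(f"{png}: mode={im.mode}, alpha={has_alpha}")
```

If every file reports alpha=False, the transparent result is never written to disk at all, rather than being flattened by the image viewer.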

There is no error in the output. There is a warning about onnxruntime and a warning about transformers, but I don't think they are related.

Python 3.10.6 (main, Dec 22 2022, 15:39:53) [MSC v.1934 64 bit (AMD64)]
Version: f2.0.1v1.10.1-previous-526-gc13b26ba
Commit hash: c13b26ba271bac327879d32f01307fc21a012321
Launching Web UI with arguments:
Total VRAM 12288 MB, total RAM 32677 MB
pytorch version: 2.3.1+cu118
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3060 : native
Hint: your device supports --cuda-malloc for potential speed improvements.
VAE dtype preferences: [torch.bfloat16, torch.float32] -> torch.bfloat16
CUDA Using Stream: False
D:\apps\stable-diffusion\Forge_2024\system\python\lib\site-packages\onnxruntime\capi\onnxruntime_validation.py:26: UserWarning: Unsupported Windows version (7). ONNX Runtime supports Windows 10 and above, only.
  warnings.warn(
D:\apps\stable-diffusion\Forge_2024\system\python\lib\site-packages\transformers\utils\hub.py:127: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
  warnings.warn(
Using pytorch cross attention
Using pytorch attention for VAE
ControlNet preprocessor location: D:\apps\stable-diffusion\Forge_2024\webui\models\ControlNetPreprocessor
2024-09-18 20:44:16,947 - ControlNet - INFO - ControlNet UI callback registered.

Model selected: {'checkpoint_info': {'filename': 'D:\\apps\\stable-diffusion\\Forge_2024\\webui\\models\\Stable-diffusion\\1.5\\2.5D artUniverse10 .safetensors', 'hash': 'd37a18cd'}, 'additional_modules': [], 'unet_storage_dtype': None}
Using online LoRAs in FP16: False
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 33.2s (prepare environment: 4.8s, import torch: 16.2s, initialize shared: 0.3s, other imports: 0.6s, load scripts: 5.2s, create ui: 4.3s, gradio launch: 1.6s).
Loading Model: {'checkpoint_info': {'filename': 'D:\\apps\\stable-diffusion\\Forge_2024\\webui\\models\\Stable-diffusion\\1.5\\2.5D artUniverse10 .safetensors', 'hash': 'd37a18cd'}, 'additional_modules': [], 'unet_storage_dtype': None}
[Unload] Trying to free all memory for cuda:0 with 0 models keep loaded ... Done.
StateDict Keys: {'unet': 686, 'vae': 248, 'text_encoder': 197, 'ignore': 0}
D:\apps\stable-diffusion\Forge_2024\system\python\lib\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
K-Model Created: {'storage_dtype': torch.float16, 'computation_dtype': torch.float16}
Model loaded in 17.4s (unload existing model: 0.2s, forge model load: 17.2s).
[Unload] Trying to free 1329.14 MB for cuda:0 with 0 models keep loaded ... Done.
[Memory Management] Target: JointTextEncoder, Free GPU: 11141.40 MB, Model Require: 234.72 MB, Previously Loaded: 0.00 MB, Inference Require: 1024.00 MB, Remaining: 9882.68 MB, All loaded to GPU.
Moving model(s) has taken 0.14 seconds
[Unload] Trying to free 1024.00 MB for cuda:0 with 1 models keep loaded ... Current free memory is 10460.36 MB ... Done.
[LayerDiffuse] LayerMethod.FG_ONLY_ATTN_SD15
[Unload] Trying to free 3155.23 MB for cuda:0 with 0 models keep loaded ... Current free memory is 10460.06 MB ... Done.
[Memory Management] Target: KModel, Free GPU: 10460.06 MB, Model Require: 1639.41 MB, Previously Loaded: 0.00 MB, Inference Require: 1024.00 MB, Remaining: 7796.65 MB, All loaded to GPU.
Moving model(s) has taken 0.69 seconds
100%|██████████████████████████████████████████| 20/20 [00:17<00:00,  1.15it/s]
[Unload] Trying to free 1568.67 MB for cuda:0 with 0 models keep loaded ... Current free memory is 8347.70 MB ... Done.
[Memory Management] Target: IntegratedAutoencoderKL, Free GPU: 8347.70 MB, Model Require: 159.56 MB, Previously Loaded: 0.00 MB, Inference Require: 1024.00 MB, Remaining: 7164.14 MB, All loaded to GPU.
Moving model(s) has taken 0.18 seconds
[Unload] Trying to free 1282.13 MB for cuda:0 with 0 models keep loaded ... Current free memory is 8186.71 MB ... Done.
[Memory Management] Target: UNet1024, Free GPU: 8186.71 MB, Model Require: 198.56 MB, Previously Loaded: 0.00 MB, Inference Require: 1024.00 MB, Remaining: 6964.15 MB, All loaded to GPU.
Moving model(s) has taken 0.14 seconds
100%|████████████████████████████████████████████| 8/8 [00:01<00:00,  6.08it/s]
Total progress: 100%|██████████████████████████| 20/20 [00:17<00:00,  1.18it/s]
Total progress: 100%|██████████████████████████| 20/20 [00:17<00:00,  1.33it/s]

Both LayerDiffuse and Forge are up to date. (In another folder I keep last year's Forge with the old LayerDiffuse from April; with the same settings, those two work perfectly and I do get a transparent image, except for img2img, which was only fixed recently. That is why I want the new versions.)

DiegoRRR · Sep 18 '24 21:09