stable-diffusion-webui-forge

[Bug]: Images generated using ANY Denoise Strength are always saved with a Denoise Strength of 0.3 in the Infotext Parameters

Open abline11 opened this issue 10 months ago • 3 comments

Checklist

  • [X] The issue exists after disabling all extensions
  • [X] The issue exists on a clean installation of webui
  • [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • [X] The issue exists in the current version of the webui
  • [X] The issue has not been reported before recently
  • [ ] The issue has been reported before but has not been fixed yet

What happened?

All images are created correctly and show the Denoise Strength used in the filename, if the filename pattern is set to include it (e.g. [seed]-[steps]-[cfg]-[denoising]-[model_name]-[sampler]-[prompt_spaces]). However, the Denoise Strength saved in the Infotext Parameters is always 0.3.

So the output image is fine, but if at some point in the future you go back to an image file, use 'PNG Info' to view the Infotext Parameters, and use that to copy back the parameters (i.e. use 'Send to txt2img'), the Denoise Strength is always copied back into the generation parameters as 0.3.

It took me a while to realise that this bug was the reason I could never reproduce the exact same image. My workaround is to read the correct Denoising Strength from the filename itself and reapply it manually before generation, but it would be nice if this bug could be fixed.
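The filename workaround above can be scripted. A minimal sketch, assuming the exact filename pattern quoted above; `denoising_from_filename` is a hypothetical helper, not part of the webui:

```python
from pathlib import Path

def denoising_from_filename(path: str) -> float:
    """Recover the denoising strength from a filename saved with the pattern
    [seed]-[steps]-[cfg]-[denoising]-[model_name]-[sampler]-[prompt_spaces]."""
    # Fields 0-3 ([seed], [steps], [cfg], [denoising]) are numeric, so
    # splitting from the left is safe even if the model name or the prompt
    # contains hyphens.
    fields = Path(path).stem.split("-")
    return float(fields[3])

print(denoising_from_filename(
    "1234567890-20-7-0.55-realismEngineSDXL_v30VAE-DPM++ 2M Karras-a photo.png"
))  # 0.55
```

This only works if [denoising] keeps its position in the pattern; a different field order would need a different index.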

Deniose Infotext Save Error 02 to 03

Steps to reproduce the problem

As explained above, just generate any image: the Infotext will record the Denoise Strength as 0.3 regardless of the actual Denoise Strength used for the generation.

What should have happened?

It should have correctly saved the Denoise Strength used during generation rather than always applying a Denoise Strength of 0.3 to the saved Infotext.

What browsers do you use to access the UI?

Microsoft Edge

Sysinfo

sysinfo-2024-04-04-22-15.json

Console logs

Already up to date.
venv "C:\AI\stable-diffusion-webui-forge\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f0.0.17v1.8.0rc-latest-276-g29be1da7
Commit hash: 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
Launching Web UI with arguments: --cuda-stream --pin-shared-memory
Total VRAM 8192 MB, total RAM 32439 MB
Set vram state to: NORMAL_VRAM
Always pin shared GPU memory
Device: cuda:0 NVIDIA GeForce RTX 3070 Ti Laptop GPU : native
Hint: your device supports --cuda-malloc for potential speed improvements.
VAE dtype: torch.bfloat16
CUDA Stream Activated:  True
Using pytorch cross attention
ControlNet preprocessor location: C:\AI\stable-diffusion-webui-forge\models\ControlNetPreprocessor
CHv1.8.3: Get Custom Model Folder
[-] ADetailer initialized. version: 24.3.5, num models: 10
CivitAI Browser+: Aria2 RPC started
Loading weights [2d5af23726] from C:\AI\stable-diffusion-webui-forge\models\Stable-diffusion\SDXL\realismEngineSDXL_v30VAE.safetensors
2024-04-04 23:25:53,617 - ControlNet - INFO - ControlNet UI callback registered.
model_type EPS
UNet ADM Dimension 2816
CHv1.8.3: Set Proxy:
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
IIB Database file has been successfully backed up to the backup folder.
Startup time: 24.0s (prepare environment: 9.9s, import torch: 4.0s, import gradio: 0.8s, setup paths: 2.5s, initialize shared: 0.2s, other imports: 0.5s, load scripts: 4.5s, create ui: 1.0s, gradio launch: 0.5s).
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale'}
To load target model SDXLClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  7086.5810546875
[Memory Management] Model Memory (MB) =  2144.3546981811523
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  3918.2263565063477
Moving model(s) has taken 0.50 seconds
Model loaded in 6.2s (load weights from disk: 0.4s, forge load real models: 5.0s, calculate empty prompt: 0.7s).
[LORA] Loaded C:\AI\stable-diffusion-webui-forge\models\Lora\SDXL\mmmnita_lora_XL_1024V40.safetensors for SDXL-UNet with 722 keys at weight 0.65 (skipped 0 keys)
[LORA] Loaded C:\AI\stable-diffusion-webui-forge\models\Lora\SDXL\mmmnita_lora_XL_1024V40.safetensors for SDXL-CLIP with 264 keys at weight 0.65 (skipped 0 keys)
To load target model SDXLClipModel
Begin to load 1 model
Reuse 1 loaded models
[Memory Management] Current Free GPU Memory (MB) =  5224.359375
[Memory Management] Model Memory (MB) =  0.0
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  4200.359375
Moving model(s) has taken 1.27 seconds
token_merging_ratio = 0.5
To load target model SDXL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  7033.88427734375
[Memory Management] Model Memory (MB) =  4897.086494445801
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  1112.7977828979492
Moving model(s) has taken 2.11 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:43<00:00,  2.17s/it]
To load target model AutoencoderKL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  6986.06494140625
[Memory Management] Model Memory (MB) =  159.55708122253418
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  5802.507860183716
Moving model(s) has taken 0.22 seconds
Cleanup minimal inference memory.
tiled upscale: 100%|███████████████████████████████████████████████████████████████████| 35/35 [00:42<00:00,  1.22s/it]
token_merging_ratio = 0.5
To load target model SDXL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  6957.6416015625
[Memory Management] Model Memory (MB) =  4897.086494445801
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  1036.5551071166992
Moving model(s) has taken 1.36 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:36<00:00,  3.67s/it]
To load target model AutoencoderKL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  6938.60546875
[Memory Management] Model Memory (MB) =  159.55708122253418
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  5755.048387527466
Moving model(s) has taken 0.21 seconds
Cleanup minimal inference memory.
tiled upscale: 100%|███████████████████████████████████████████████████████████████████| 54/54 [00:18<00:00,  2.93it/s]
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
mediapipe: 1 detected.
ScuNET: 100%|██████████████████████████████████████████████████████████████████████████| 16/16 [00:01<00:00,  8.58it/s]
To load target model SDXLClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  6806.751953125
[Memory Management] Model Memory (MB) =  2144.3546981811523
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  3638.3972549438477
Moving model(s) has taken 0.47 seconds
token_merging_ratio = 0.5
To load target model SDXL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  5662.25048828125
[Memory Management] Model Memory (MB) =  4897.086494445801
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  -258.8360061645508
[Memory Management] Requested ASYNC Preserved Memory (MB) =  3567.8849906921387
[Memory Management] Parameters Loaded to ASYNC Stream (MB) =  1329.1650390625
[Memory Management] Parameters Loaded to GPU (MB) =  3567.8833084106445
Moving model(s) has taken 4.55 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 13/13 [01:08<00:00,  5.29s/it]
To load target model AutoencoderKL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  6937.43994140625
[Memory Management] Model Memory (MB) =  159.55708122253418
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  5753.882860183716
Moving model(s) has taken 0.58 seconds
mediapipe: 1 detected.
ScuNET: 100%|████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00,  8.93it/s]
To load target model SDXLClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  6774.42529296875
[Memory Management] Model Memory (MB) =  2144.3546981811523
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  3606.0705947875977
Moving model(s) has taken 0.48 seconds
token_merging_ratio = 0.5
To load target model SDXL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  5690.3291015625
[Memory Management] Model Memory (MB) =  4897.086494445801
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  -230.75739288330078
[Memory Management] Requested ASYNC Preserved Memory (MB) =  3589.4839239120483
[Memory Management] Parameters Loaded to ASYNC Stream (MB) =  1307.6751708984375
[Memory Management] Parameters Loaded to GPU (MB) =  3589.373176574707
Moving model(s) has taken 3.44 seconds
100%|████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:35<00:00,  5.11s/it]
To load target model AutoencoderKL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  6904.2197265625
[Memory Management] Model Memory (MB) =  159.55708122253418
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  5720.662645339966
Moving model(s) has taken 0.66 seconds

0: 640x448 (no detections), 120.9ms
Speed: 4.9ms preprocess, 120.9ms inference, 2.2ms postprocess per image at shape (1, 3, 640, 448)
[-] ADetailer: nothing detected on image 1 with 3rd settings.
Total progress: 100%|██████████████████████████████████████████████████████████████████| 30/30 [04:43<00:00,  9.46s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 30/30 [04:43<00:00,  4.37s/it]

Additional information

No response

abline11 avatar Apr 04 '24 22:04 abline11

Mine always says .4

ArnorWing avatar Apr 09 '24 12:04 ArnorWing

Some hard-wired bug. Should be simple for someone to fix.

abline11 avatar Apr 09 '24 12:04 abline11

If you have ADetailer enabled, this is a bug in ADetailer: https://github.com/Bing-su/adetailer/issues/552

jhlchu avatar Apr 10 '24 09:04 jhlchu
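If the linked ADetailer report is right, the likely mechanism is generic: a post-processing pass that runs inpainting with its own denoising strength and writes its parameters last will clobber the value from the main generation. A purely illustrative sketch of that overwrite, not ADetailer's actual code:

```python
# Parameters recorded by the main txt2img/img2img pass (values illustrative).
main_params = {"Steps": 30, "Denoising strength": 0.55}

# A post-processing extension re-runs generation with its own inpaint
# denoising strength and merges its parameters over the originals.
extension_params = {"Denoising strength": 0.3}

# Later keys win in a dict merge, so the extension's value ends up in the
# saved infotext and the user's 0.55 is lost.
merged = {**main_params, **extension_params}
print(merged["Denoising strength"])  # 0.3
```

This would explain why different users see different fixed values (0.3 here, 0.4 for ArnorWing): each sees their own extension setting, not the generation's.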