
flux1-dev generates a monocolored image at the last step.

Open killerciao opened this issue 1 year ago • 4 comments

While the other models (flux1-dev-fp8 / flux1-dev-bnb-nf4-v2) work just fine, generating images with flux1-dev results in an image of random colors. The previews look normal during generation, but once generation finishes the image gets filled with random colors. This is my console log:

Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr  5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: f2.0.1v1.10.1-previous-323-g72ab92f8
Commit hash: 72ab92f83e5a9e193726313c6d88ab435a61fb59
F:\IA\Packages\stable-diffusion-webui-forge\extensions-builtin\forge_legacy_preprocessors\install.py:2: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
  import pkg_resources
F:\IA\Packages\stable-diffusion-webui-forge\extensions-builtin\sd_forge_controlnet\install.py:2: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
  import pkg_resources
Launching Web UI with arguments: --gradio-allowed-path 'F:\IA\Images'
Total VRAM 24564 MB, total RAM 65362 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4090 : native
Hint: your device supports --cuda-malloc for potential speed improvements.
VAE dtype preferences: [torch.bfloat16, torch.float32] -> torch.bfloat16
CUDA Using Stream: False
Using pytorch cross attention
Using pytorch attention for VAE
ControlNet preprocessor location: F:\IA\Packages\stable-diffusion-webui-forge\models\ControlNetPreprocessor
2024-08-18 10:20:37,524 - ControlNet - INFO - ControlNet UI callback registered.
Model selected: {'checkpoint_info': {'filename': 'F:\\IA\\Packages\\stable-diffusion-webui-forge\\models\\Stable-diffusion\\flux1-dev.safetensors', 'hash': 'b04b3ba1'}, 'additional_modules': ['F:\\IA\\Packages\\stable-diffusion-webui-forge\\models\\text_encoder\\Flux\\clip_l.safetensors', 'F:\\IA\\Packages\\stable-diffusion-webui-forge\\models\\text_encoder\\Flux\\t5xxl_fp16.safetensors', 'F:\\IA\\Packages\\stable-diffusion-webui-forge\\models\\VAE\\diffusion_pytorch_model.safetensors'], 'unet_storage_dtype': None}
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 8.9s (prepare environment: 1.7s, import torch: 3.4s, initialize shared: 0.2s, other imports: 0.5s, load scripts: 0.9s, create ui: 1.5s, gradio launch: 0.6s).
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': True}
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': True}
Loading Model: {'checkpoint_info': {'filename': 'F:\\IA\\Packages\\stable-diffusion-webui-forge\\models\\Stable-diffusion\\flux1-dev.safetensors', 'hash': 'b04b3ba1'}, 'additional_modules': ['F:\\IA\\Packages\\stable-diffusion-webui-forge\\models\\text_encoder\\Flux\\clip_l.safetensors', 'F:\\IA\\Packages\\stable-diffusion-webui-forge\\models\\text_encoder\\Flux\\t5xxl_fp16.safetensors', 'F:\\IA\\Packages\\stable-diffusion-webui-forge\\models\\VAE\\diffusion_pytorch_model.safetensors'], 'unet_storage_dtype': None}
[Unload] Trying to free 953674316406250018963456.00 MB for cuda:0 with 0 models keep loaded ...
StateDict Keys: {'transformer': 780, 'vae': 244, 'text_encoder': 196, 'text_encoder_2': 220, 'ignore': 0}
Using Default T5 Data Type: torch.float16
Working with z of shape (1, 16, 32, 32) = 16384 dimensions.
IntegratedAutoencoderKL Missing: ['encoder.down.0.block.0.norm1.weight', 'encoder.down.0.block.0.norm1.bias', 'encoder.down.0.block.0.conv1.weight', 'encoder.down.0.block.0.conv1.bias', 'encoder.down.0.block.0.norm2.weight', 'encoder.down.0.block.0.norm2.bias', 'encoder.down.0.block.0.conv2.weight', 'encoder.down.0.block.0.conv2.bias', 'encoder.down.0.block.1.norm1.weight', 'encoder.down.0.block.1.norm1.bias', 'encoder.down.0.block.1.conv1.weight', 'encoder.down.0.block.1.conv1.bias', 'encoder.down.0.block.1.norm2.weight', 'encoder.down.0.block.1.norm2.bias', 'encoder.down.0.block.1.conv2.weight', 'encoder.down.0.block.1.conv2.bias', 'encoder.down.0.downsample.conv.weight', 'encoder.down.0.downsample.conv.bias', 'encoder.down.1.block.0.norm1.weight', 'encoder.down.1.block.0.norm1.bias', 'encoder.down.1.block.0.conv1.weight', 'encoder.down.1.block.0.conv1.bias', 'encoder.down.1.block.0.norm2.weight', 'encoder.down.1.block.0.norm2.bias', 'encoder.down.1.block.0.conv2.weight', 'encoder.down.1.block.0.conv2.bias', 'encoder.down.1.block.0.nin_shortcut.weight', 'encoder.down.1.block.0.nin_shortcut.bias', 'encoder.down.1.block.1.norm1.weight', 'encoder.down.1.block.1.norm1.bias', 'encoder.down.1.block.1.conv1.weight', 'encoder.down.1.block.1.conv1.bias', 'encoder.down.1.block.1.norm2.weight', 'encoder.down.1.block.1.norm2.bias', 'encoder.down.1.block.1.conv2.weight', 'encoder.down.1.block.1.conv2.bias', 'encoder.down.1.downsample.conv.weight', 'encoder.down.1.downsample.conv.bias', 'encoder.down.2.block.0.norm1.weight', 'encoder.down.2.block.0.norm1.bias', 'encoder.down.2.block.0.conv1.weight', 'encoder.down.2.block.0.conv1.bias', 'encoder.down.2.block.0.norm2.weight', 'encoder.down.2.block.0.norm2.bias', 'encoder.down.2.block.0.conv2.weight', 'encoder.down.2.block.0.conv2.bias', 'encoder.down.2.block.0.nin_shortcut.weight', 'encoder.down.2.block.0.nin_shortcut.bias', 'encoder.down.2.block.1.norm1.weight', 'encoder.down.2.block.1.norm1.bias', 
'encoder.down.2.block.1.conv1.weight', 'encoder.down.2.block.1.conv1.bias', 'encoder.down.2.block.1.norm2.weight', 'encoder.down.2.block.1.norm2.bias', 'encoder.down.2.block.1.conv2.weight', 'encoder.down.2.block.1.conv2.bias', 'encoder.down.2.downsample.conv.weight', 'encoder.down.2.downsample.conv.bias', 'encoder.down.3.block.0.norm1.weight', 'encoder.down.3.block.0.norm1.bias', 'encoder.down.3.block.0.conv1.weight', 'encoder.down.3.block.0.conv1.bias', 'encoder.down.3.block.0.norm2.weight', 'encoder.down.3.block.0.norm2.bias', 'encoder.down.3.block.0.conv2.weight', 'encoder.down.3.block.0.conv2.bias', 'encoder.down.3.block.1.norm1.weight', 'encoder.down.3.block.1.norm1.bias', 'encoder.down.3.block.1.conv1.weight', 'encoder.down.3.block.1.conv1.bias', 'encoder.down.3.block.1.norm2.weight', 'encoder.down.3.block.1.norm2.bias', 'encoder.down.3.block.1.conv2.weight', 'encoder.down.3.block.1.conv2.bias', 'encoder.mid.block_1.norm1.weight', 'encoder.mid.block_1.norm1.bias', 'encoder.mid.block_1.conv1.weight', 'encoder.mid.block_1.conv1.bias', 'encoder.mid.block_1.norm2.weight', 'encoder.mid.block_1.norm2.bias', 'encoder.mid.block_1.conv2.weight', 'encoder.mid.block_1.conv2.bias', 'encoder.mid.attn_1.norm.weight', 'encoder.mid.attn_1.norm.bias', 'encoder.mid.attn_1.q.weight', 'encoder.mid.attn_1.q.bias', 'encoder.mid.attn_1.k.weight', 'encoder.mid.attn_1.k.bias', 'encoder.mid.attn_1.v.weight', 'encoder.mid.attn_1.v.bias', 'encoder.mid.attn_1.proj_out.weight', 'encoder.mid.attn_1.proj_out.bias', 'encoder.mid.block_2.norm1.weight', 'encoder.mid.block_2.norm1.bias', 'encoder.mid.block_2.conv1.weight', 'encoder.mid.block_2.conv1.bias', 'encoder.mid.block_2.norm2.weight', 'encoder.mid.block_2.norm2.bias', 'encoder.mid.block_2.conv2.weight', 'encoder.mid.block_2.conv2.bias', 'encoder.norm_out.weight', 'encoder.norm_out.bias', 'decoder.mid.block_1.norm1.weight', 'decoder.mid.block_1.norm1.bias', 'decoder.mid.block_1.conv1.weight', 'decoder.mid.block_1.conv1.bias', 
'decoder.mid.block_1.norm2.weight', 'decoder.mid.block_1.norm2.bias', 'decoder.mid.block_1.conv2.weight', 'decoder.mid.block_1.conv2.bias', 'decoder.mid.attn_1.norm.weight', 'decoder.mid.attn_1.norm.bias', 'decoder.mid.attn_1.q.weight', 'decoder.mid.attn_1.q.bias', 'decoder.mid.attn_1.k.weight', 'decoder.mid.attn_1.k.bias', 'decoder.mid.attn_1.v.weight', 'decoder.mid.attn_1.v.bias', 'decoder.mid.attn_1.proj_out.weight', 'decoder.mid.attn_1.proj_out.bias', 'decoder.mid.block_2.norm1.weight', 'decoder.mid.block_2.norm1.bias', 'decoder.mid.block_2.conv1.weight', 'decoder.mid.block_2.conv1.bias', 'decoder.mid.block_2.norm2.weight', 'decoder.mid.block_2.norm2.bias', 'decoder.mid.block_2.conv2.weight', 'decoder.mid.block_2.conv2.bias', 'decoder.up.0.block.0.norm1.weight', 'decoder.up.0.block.0.norm1.bias', 'decoder.up.0.block.0.conv1.weight', 'decoder.up.0.block.0.conv1.bias', 'decoder.up.0.block.0.norm2.weight', 'decoder.up.0.block.0.norm2.bias', 'decoder.up.0.block.0.conv2.weight', 'decoder.up.0.block.0.conv2.bias', 'decoder.up.0.block.0.nin_shortcut.weight', 'decoder.up.0.block.0.nin_shortcut.bias', 'decoder.up.0.block.1.norm1.weight', 'decoder.up.0.block.1.norm1.bias', 'decoder.up.0.block.1.conv1.weight', 'decoder.up.0.block.1.conv1.bias', 'decoder.up.0.block.1.norm2.weight', 'decoder.up.0.block.1.norm2.bias', 'decoder.up.0.block.1.conv2.weight', 'decoder.up.0.block.1.conv2.bias', 'decoder.up.0.block.2.norm1.weight', 'decoder.up.0.block.2.norm1.bias', 'decoder.up.0.block.2.conv1.weight', 'decoder.up.0.block.2.conv1.bias', 'decoder.up.0.block.2.norm2.weight', 'decoder.up.0.block.2.norm2.bias', 'decoder.up.0.block.2.conv2.weight', 'decoder.up.0.block.2.conv2.bias', 'decoder.up.1.block.0.norm1.weight', 'decoder.up.1.block.0.norm1.bias', 'decoder.up.1.block.0.conv1.weight', 'decoder.up.1.block.0.conv1.bias', 'decoder.up.1.block.0.norm2.weight', 'decoder.up.1.block.0.norm2.bias', 'decoder.up.1.block.0.conv2.weight', 'decoder.up.1.block.0.conv2.bias', 
'decoder.up.1.block.0.nin_shortcut.weight', 'decoder.up.1.block.0.nin_shortcut.bias', 'decoder.up.1.block.1.norm1.weight', 'decoder.up.1.block.1.norm1.bias', 'decoder.up.1.block.1.conv1.weight', 'decoder.up.1.block.1.conv1.bias', 'decoder.up.1.block.1.norm2.weight', 'decoder.up.1.block.1.norm2.bias', 'decoder.up.1.block.1.conv2.weight', 'decoder.up.1.block.1.conv2.bias', 'decoder.up.1.block.2.norm1.weight', 'decoder.up.1.block.2.norm1.bias', 'decoder.up.1.block.2.conv1.weight', 'decoder.up.1.block.2.conv1.bias', 'decoder.up.1.block.2.norm2.weight', 'decoder.up.1.block.2.norm2.bias', 'decoder.up.1.block.2.conv2.weight', 'decoder.up.1.block.2.conv2.bias', 'decoder.up.1.upsample.conv.weight', 'decoder.up.1.upsample.conv.bias', 'decoder.up.2.block.0.norm1.weight', 'decoder.up.2.block.0.norm1.bias', 'decoder.up.2.block.0.conv1.weight', 'decoder.up.2.block.0.conv1.bias', 'decoder.up.2.block.0.norm2.weight', 'decoder.up.2.block.0.norm2.bias', 'decoder.up.2.block.0.conv2.weight', 'decoder.up.2.block.0.conv2.bias', 'decoder.up.2.block.1.norm1.weight', 'decoder.up.2.block.1.norm1.bias', 'decoder.up.2.block.1.conv1.weight', 'decoder.up.2.block.1.conv1.bias', 'decoder.up.2.block.1.norm2.weight', 'decoder.up.2.block.1.norm2.bias', 'decoder.up.2.block.1.conv2.weight', 'decoder.up.2.block.1.conv2.bias', 'decoder.up.2.block.2.norm1.weight', 'decoder.up.2.block.2.norm1.bias', 'decoder.up.2.block.2.conv1.weight', 'decoder.up.2.block.2.conv1.bias', 'decoder.up.2.block.2.norm2.weight', 'decoder.up.2.block.2.norm2.bias', 'decoder.up.2.block.2.conv2.weight', 'decoder.up.2.block.2.conv2.bias', 'decoder.up.2.upsample.conv.weight', 'decoder.up.2.upsample.conv.bias', 'decoder.up.3.block.0.norm1.weight', 'decoder.up.3.block.0.norm1.bias', 'decoder.up.3.block.0.conv1.weight', 'decoder.up.3.block.0.conv1.bias', 'decoder.up.3.block.0.norm2.weight', 'decoder.up.3.block.0.norm2.bias', 'decoder.up.3.block.0.conv2.weight', 'decoder.up.3.block.0.conv2.bias', 'decoder.up.3.block.1.norm1.weight', 
'decoder.up.3.block.1.norm1.bias', 'decoder.up.3.block.1.conv1.weight', 'decoder.up.3.block.1.conv1.bias', 'decoder.up.3.block.1.norm2.weight', 'decoder.up.3.block.1.norm2.bias', 'decoder.up.3.block.1.conv2.weight', 'decoder.up.3.block.1.conv2.bias', 'decoder.up.3.block.2.norm1.weight', 'decoder.up.3.block.2.norm1.bias', 'decoder.up.3.block.2.conv1.weight', 'decoder.up.3.block.2.conv1.bias', 'decoder.up.3.block.2.norm2.weight', 'decoder.up.3.block.2.norm2.bias', 'decoder.up.3.block.2.conv2.weight', 'decoder.up.3.block.2.conv2.bias', 'decoder.up.3.upsample.conv.weight', 'decoder.up.3.upsample.conv.bias', 'decoder.norm_out.weight', 'decoder.norm_out.bias']
IntegratedAutoencoderKL Unexpected: ['encoder.conv_norm_out.bias', 'encoder.conv_norm_out.weight', 'encoder.down_blocks.0.downsamplers.0.conv.bias', 'encoder.down_blocks.0.downsamplers.0.conv.weight', 'encoder.down_blocks.0.resnets.0.conv1.bias', 'encoder.down_blocks.0.resnets.0.conv1.weight', 'encoder.down_blocks.0.resnets.0.conv2.bias', 'encoder.down_blocks.0.resnets.0.conv2.weight', 'encoder.down_blocks.0.resnets.0.norm1.bias', 'encoder.down_blocks.0.resnets.0.norm1.weight', 'encoder.down_blocks.0.resnets.0.norm2.bias', 'encoder.down_blocks.0.resnets.0.norm2.weight', 'encoder.down_blocks.0.resnets.1.conv1.bias', 'encoder.down_blocks.0.resnets.1.conv1.weight', 'encoder.down_blocks.0.resnets.1.conv2.bias', 'encoder.down_blocks.0.resnets.1.conv2.weight', 'encoder.down_blocks.0.resnets.1.norm1.bias', 'encoder.down_blocks.0.resnets.1.norm1.weight', 'encoder.down_blocks.0.resnets.1.norm2.bias', 'encoder.down_blocks.0.resnets.1.norm2.weight', 'encoder.down_blocks.1.downsamplers.0.conv.bias', 'encoder.down_blocks.1.downsamplers.0.conv.weight', 'encoder.down_blocks.1.resnets.0.conv1.bias', 'encoder.down_blocks.1.resnets.0.conv1.weight', 'encoder.down_blocks.1.resnets.0.conv2.bias', 'encoder.down_blocks.1.resnets.0.conv2.weight', 'encoder.down_blocks.1.resnets.0.conv_shortcut.bias', 'encoder.down_blocks.1.resnets.0.conv_shortcut.weight', 'encoder.down_blocks.1.resnets.0.norm1.bias', 'encoder.down_blocks.1.resnets.0.norm1.weight', 'encoder.down_blocks.1.resnets.0.norm2.bias', 'encoder.down_blocks.1.resnets.0.norm2.weight', 'encoder.down_blocks.1.resnets.1.conv1.bias', 'encoder.down_blocks.1.resnets.1.conv1.weight', 'encoder.down_blocks.1.resnets.1.conv2.bias', 'encoder.down_blocks.1.resnets.1.conv2.weight', 'encoder.down_blocks.1.resnets.1.norm1.bias', 'encoder.down_blocks.1.resnets.1.norm1.weight', 'encoder.down_blocks.1.resnets.1.norm2.bias', 'encoder.down_blocks.1.resnets.1.norm2.weight', 'encoder.down_blocks.2.downsamplers.0.conv.bias', 
'encoder.down_blocks.2.downsamplers.0.conv.weight', 'encoder.down_blocks.2.resnets.0.conv1.bias', 'encoder.down_blocks.2.resnets.0.conv1.weight', 'encoder.down_blocks.2.resnets.0.conv2.bias', 'encoder.down_blocks.2.resnets.0.conv2.weight', 'encoder.down_blocks.2.resnets.0.conv_shortcut.bias', 'encoder.down_blocks.2.resnets.0.conv_shortcut.weight', 'encoder.down_blocks.2.resnets.0.norm1.bias', 'encoder.down_blocks.2.resnets.0.norm1.weight', 'encoder.down_blocks.2.resnets.0.norm2.bias', 'encoder.down_blocks.2.resnets.0.norm2.weight', 'encoder.down_blocks.2.resnets.1.conv1.bias', 'encoder.down_blocks.2.resnets.1.conv1.weight', 'encoder.down_blocks.2.resnets.1.conv2.bias', 'encoder.down_blocks.2.resnets.1.conv2.weight', 'encoder.down_blocks.2.resnets.1.norm1.bias', 'encoder.down_blocks.2.resnets.1.norm1.weight', 'encoder.down_blocks.2.resnets.1.norm2.bias', 'encoder.down_blocks.2.resnets.1.norm2.weight', 'encoder.down_blocks.3.resnets.0.conv1.bias', 'encoder.down_blocks.3.resnets.0.conv1.weight', 'encoder.down_blocks.3.resnets.0.conv2.bias', 'encoder.down_blocks.3.resnets.0.conv2.weight', 'encoder.down_blocks.3.resnets.0.norm1.bias', 'encoder.down_blocks.3.resnets.0.norm1.weight', 'encoder.down_blocks.3.resnets.0.norm2.bias', 'encoder.down_blocks.3.resnets.0.norm2.weight', 'encoder.down_blocks.3.resnets.1.conv1.bias', 'encoder.down_blocks.3.resnets.1.conv1.weight', 'encoder.down_blocks.3.resnets.1.conv2.bias', 'encoder.down_blocks.3.resnets.1.conv2.weight', 'encoder.down_blocks.3.resnets.1.norm1.bias', 'encoder.down_blocks.3.resnets.1.norm1.weight', 'encoder.down_blocks.3.resnets.1.norm2.bias', 'encoder.down_blocks.3.resnets.1.norm2.weight', 'encoder.mid_block.attentions.0.group_norm.bias', 'encoder.mid_block.attentions.0.group_norm.weight', 'encoder.mid_block.attentions.0.to_k.bias', 'encoder.mid_block.attentions.0.to_k.weight', 'encoder.mid_block.attentions.0.to_out.0.bias', 'encoder.mid_block.attentions.0.to_out.0.weight', 'encoder.mid_block.attentions.0.to_q.bias', 
'encoder.mid_block.attentions.0.to_q.weight', 'encoder.mid_block.attentions.0.to_v.bias', 'encoder.mid_block.attentions.0.to_v.weight', 'encoder.mid_block.resnets.0.conv1.bias', 'encoder.mid_block.resnets.0.conv1.weight', 'encoder.mid_block.resnets.0.conv2.bias', 'encoder.mid_block.resnets.0.conv2.weight', 'encoder.mid_block.resnets.0.norm1.bias', 'encoder.mid_block.resnets.0.norm1.weight', 'encoder.mid_block.resnets.0.norm2.bias', 'encoder.mid_block.resnets.0.norm2.weight', 'encoder.mid_block.resnets.1.conv1.bias', 'encoder.mid_block.resnets.1.conv1.weight', 'encoder.mid_block.resnets.1.conv2.bias', 'encoder.mid_block.resnets.1.conv2.weight', 'encoder.mid_block.resnets.1.norm1.bias', 'encoder.mid_block.resnets.1.norm1.weight', 'encoder.mid_block.resnets.1.norm2.bias', 'encoder.mid_block.resnets.1.norm2.weight', 'decoder.conv_norm_out.bias', 'decoder.conv_norm_out.weight', 'decoder.mid_block.attentions.0.group_norm.bias', 'decoder.mid_block.attentions.0.group_norm.weight', 'decoder.mid_block.attentions.0.to_k.bias', 'decoder.mid_block.attentions.0.to_k.weight', 'decoder.mid_block.attentions.0.to_out.0.bias', 'decoder.mid_block.attentions.0.to_out.0.weight', 'decoder.mid_block.attentions.0.to_q.bias', 'decoder.mid_block.attentions.0.to_q.weight', 'decoder.mid_block.attentions.0.to_v.bias', 'decoder.mid_block.attentions.0.to_v.weight', 'decoder.mid_block.resnets.0.conv1.bias', 'decoder.mid_block.resnets.0.conv1.weight', 'decoder.mid_block.resnets.0.conv2.bias', 'decoder.mid_block.resnets.0.conv2.weight', 'decoder.mid_block.resnets.0.norm1.bias', 'decoder.mid_block.resnets.0.norm1.weight', 'decoder.mid_block.resnets.0.norm2.bias', 'decoder.mid_block.resnets.0.norm2.weight', 'decoder.mid_block.resnets.1.conv1.bias', 'decoder.mid_block.resnets.1.conv1.weight', 'decoder.mid_block.resnets.1.conv2.bias', 'decoder.mid_block.resnets.1.conv2.weight', 'decoder.mid_block.resnets.1.norm1.bias', 'decoder.mid_block.resnets.1.norm1.weight', 'decoder.mid_block.resnets.1.norm2.bias', 
'decoder.mid_block.resnets.1.norm2.weight', 'decoder.up_blocks.0.resnets.0.conv1.bias', 'decoder.up_blocks.0.resnets.0.conv1.weight', 'decoder.up_blocks.0.resnets.0.conv2.bias', 'decoder.up_blocks.0.resnets.0.conv2.weight', 'decoder.up_blocks.0.resnets.0.norm1.bias', 'decoder.up_blocks.0.resnets.0.norm1.weight', 'decoder.up_blocks.0.resnets.0.norm2.bias', 'decoder.up_blocks.0.resnets.0.norm2.weight', 'decoder.up_blocks.0.resnets.1.conv1.bias', 'decoder.up_blocks.0.resnets.1.conv1.weight', 'decoder.up_blocks.0.resnets.1.conv2.bias', 'decoder.up_blocks.0.resnets.1.conv2.weight', 'decoder.up_blocks.0.resnets.1.norm1.bias', 'decoder.up_blocks.0.resnets.1.norm1.weight', 'decoder.up_blocks.0.resnets.1.norm2.bias', 'decoder.up_blocks.0.resnets.1.norm2.weight', 'decoder.up_blocks.0.resnets.2.conv1.bias', 'decoder.up_blocks.0.resnets.2.conv1.weight', 'decoder.up_blocks.0.resnets.2.conv2.bias', 'decoder.up_blocks.0.resnets.2.conv2.weight', 'decoder.up_blocks.0.resnets.2.norm1.bias', 'decoder.up_blocks.0.resnets.2.norm1.weight', 'decoder.up_blocks.0.resnets.2.norm2.bias', 'decoder.up_blocks.0.resnets.2.norm2.weight', 'decoder.up_blocks.0.upsamplers.0.conv.bias', 'decoder.up_blocks.0.upsamplers.0.conv.weight', 'decoder.up_blocks.1.resnets.0.conv1.bias', 'decoder.up_blocks.1.resnets.0.conv1.weight', 'decoder.up_blocks.1.resnets.0.conv2.bias', 'decoder.up_blocks.1.resnets.0.conv2.weight', 'decoder.up_blocks.1.resnets.0.norm1.bias', 'decoder.up_blocks.1.resnets.0.norm1.weight', 'decoder.up_blocks.1.resnets.0.norm2.bias', 'decoder.up_blocks.1.resnets.0.norm2.weight', 'decoder.up_blocks.1.resnets.1.conv1.bias', 'decoder.up_blocks.1.resnets.1.conv1.weight', 'decoder.up_blocks.1.resnets.1.conv2.bias', 'decoder.up_blocks.1.resnets.1.conv2.weight', 'decoder.up_blocks.1.resnets.1.norm1.bias', 'decoder.up_blocks.1.resnets.1.norm1.weight', 'decoder.up_blocks.1.resnets.1.norm2.bias', 'decoder.up_blocks.1.resnets.1.norm2.weight', 'decoder.up_blocks.1.resnets.2.conv1.bias', 
'decoder.up_blocks.1.resnets.2.conv1.weight', 'decoder.up_blocks.1.resnets.2.conv2.bias', 'decoder.up_blocks.1.resnets.2.conv2.weight', 'decoder.up_blocks.1.resnets.2.norm1.bias', 'decoder.up_blocks.1.resnets.2.norm1.weight', 'decoder.up_blocks.1.resnets.2.norm2.bias', 'decoder.up_blocks.1.resnets.2.norm2.weight', 'decoder.up_blocks.1.upsamplers.0.conv.bias', 'decoder.up_blocks.1.upsamplers.0.conv.weight', 'decoder.up_blocks.2.resnets.0.conv1.bias', 'decoder.up_blocks.2.resnets.0.conv1.weight', 'decoder.up_blocks.2.resnets.0.conv2.bias', 'decoder.up_blocks.2.resnets.0.conv2.weight', 'decoder.up_blocks.2.resnets.0.conv_shortcut.bias', 'decoder.up_blocks.2.resnets.0.conv_shortcut.weight', 'decoder.up_blocks.2.resnets.0.norm1.bias', 'decoder.up_blocks.2.resnets.0.norm1.weight', 'decoder.up_blocks.2.resnets.0.norm2.bias', 'decoder.up_blocks.2.resnets.0.norm2.weight', 'decoder.up_blocks.2.resnets.1.conv1.bias', 'decoder.up_blocks.2.resnets.1.conv1.weight', 'decoder.up_blocks.2.resnets.1.conv2.bias', 'decoder.up_blocks.2.resnets.1.conv2.weight', 'decoder.up_blocks.2.resnets.1.norm1.bias', 'decoder.up_blocks.2.resnets.1.norm1.weight', 'decoder.up_blocks.2.resnets.1.norm2.bias', 'decoder.up_blocks.2.resnets.1.norm2.weight', 'decoder.up_blocks.2.resnets.2.conv1.bias', 'decoder.up_blocks.2.resnets.2.conv1.weight', 'decoder.up_blocks.2.resnets.2.conv2.bias', 'decoder.up_blocks.2.resnets.2.conv2.weight', 'decoder.up_blocks.2.resnets.2.norm1.bias', 'decoder.up_blocks.2.resnets.2.norm1.weight', 'decoder.up_blocks.2.resnets.2.norm2.bias', 'decoder.up_blocks.2.resnets.2.norm2.weight', 'decoder.up_blocks.2.upsamplers.0.conv.bias', 'decoder.up_blocks.2.upsamplers.0.conv.weight', 'decoder.up_blocks.3.resnets.0.conv1.bias', 'decoder.up_blocks.3.resnets.0.conv1.weight', 'decoder.up_blocks.3.resnets.0.conv2.bias', 'decoder.up_blocks.3.resnets.0.conv2.weight', 'decoder.up_blocks.3.resnets.0.conv_shortcut.bias', 'decoder.up_blocks.3.resnets.0.conv_shortcut.weight', 
'decoder.up_blocks.3.resnets.0.norm1.bias', 'decoder.up_blocks.3.resnets.0.norm1.weight', 'decoder.up_blocks.3.resnets.0.norm2.bias', 'decoder.up_blocks.3.resnets.0.norm2.weight', 'decoder.up_blocks.3.resnets.1.conv1.bias', 'decoder.up_blocks.3.resnets.1.conv1.weight', 'decoder.up_blocks.3.resnets.1.conv2.bias', 'decoder.up_blocks.3.resnets.1.conv2.weight', 'decoder.up_blocks.3.resnets.1.norm1.bias', 'decoder.up_blocks.3.resnets.1.norm1.weight', 'decoder.up_blocks.3.resnets.1.norm2.bias', 'decoder.up_blocks.3.resnets.1.norm2.weight', 'decoder.up_blocks.3.resnets.2.conv1.bias', 'decoder.up_blocks.3.resnets.2.conv1.weight', 'decoder.up_blocks.3.resnets.2.conv2.bias', 'decoder.up_blocks.3.resnets.2.conv2.weight', 'decoder.up_blocks.3.resnets.2.norm1.bias', 'decoder.up_blocks.3.resnets.2.norm1.weight', 'decoder.up_blocks.3.resnets.2.norm2.bias', 'decoder.up_blocks.3.resnets.2.norm2.weight']
K-Model Created: {'storage_dtype': torch.bfloat16, 'computation_dtype': torch.bfloat16}
Model loaded in 0.5s (unload existing model: 0.2s, forge model load: 0.4s).
[LORA] Loaded F:\IA\Packages\stable-diffusion-webui-forge\models\Lora\VirginiaCiucciFluxLora.safetensors for KModel-UNet with 494 keys at weight 1.0 (skipped 0 keys)
Skipping unconditional conditioning when CFG = 1. Negative Prompts are ignored.
To load target model JointTextEncoder
Begin to load 1 model
[Unload] Trying to free 13464.34 MB for cuda:0 with 0 models keep loaded ...
[Memory Management] Current Free GPU Memory: 22980.48 MB
[Memory Management] Required Model Memory: 9569.49 MB
[Memory Management] Required Inference Memory: 1024.00 MB
[Memory Management] Estimated Remaining GPU Memory: 12386.99 MB
Moving model(s) has taken 3.13 seconds
Distilled CFG Scale: 3.5
To load target model KModel
Begin to load 1 model
[Unload] Trying to free 30800.42 MB for cuda:0 with 0 models keep loaded ...
[Unload] Current free memory is 13241.91 MB ... 
[Unload] Unload model JointTextEncoder
[Memory Management] Current Free GPU Memory: 22890.52 MB
[Memory Management] Required Model Memory: 22700.13 MB
[Memory Management] Required Inference Memory: 1024.00 MB
[Memory Management] Estimated Remaining GPU Memory: -833.62 MB
Patching LoRAs for KModel: 100%|██████████| 304/304 [00:21<00:00, 14.06it/s]
LoRA patching has taken 21.62 seconds
[Memory Management] Loaded to Shared Swap: 2142.51 MB (blocked method)
[Memory Management] Loaded to GPU: 20557.59 MB
Moving model(s) has taken 30.93 seconds
100%|██████████| 20/20 [00:15<00:00,  1.33it/s]
To load target model IntegratedAutoencoderKL
Begin to load 1 model
[Unload] Trying to free 4495.77 MB for cuda:0 with 0 models keep loaded ...
[Unload] Current free memory is 2048.40 MB ... 
[Unload] Unload model KModel
[Memory Management] Current Free GPU Memory: 22866.07 MB
[Memory Management] Required Model Memory: 159.87 MB
[Memory Management] Required Inference Memory: 1024.00 MB
[Memory Management] Estimated Remaining GPU Memory: 21682.19 MB
Moving model(s) has taken 4.43 seconds
Total progress: 100%|██████████| 20/20 [00:19<00:00,  1.05it/s]

https://github.com/user-attachments/assets/75029201-c461-43fa-8bb5-9937719e39b7

killerciao avatar Aug 18 '24 08:08 killerciao

same issue here!

queenofinvidia avatar Aug 18 '24 08:08 queenofinvidia

Same here. Black, blue, or gray at the last step.

protector131090 avatar Aug 18 '24 08:08 protector131090

Use another VAE.

vixenius avatar Aug 18 '24 13:08 vixenius

I found the problem. I had downloaded the VAE from the Black Forest Labs repo's VAE folder, but that is the wrong file: you need the ae.safetensors file from the repo's main folder instead: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/ae.safetensors
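The "IntegratedAutoencoderKL Missing/Unexpected" lists in the log above show why: the file from the VAE subfolder is a diffusers-layout checkpoint (keys like `encoder.down_blocks.0.resnets.0...`), while Forge expects the original layout used by ae.safetensors (keys like `encoder.down.0.block.0...`). As a rough sanity check before loading, you can inspect a checkpoint's key names yourself. This is only an illustrative sketch; the `vae_format` helper and the file path in the comment are made up for the example, not part of Forge.

```python
def vae_format(keys):
    """Guess a VAE checkpoint's layout from its state-dict key names.

    Key prefixes are taken from the Missing/Unexpected lists in the log:
    diffusers uses encoder.down_blocks.*, the original format encoder.down.*.
    """
    if any(k.startswith("encoder.down_blocks.") for k in keys):
        return "diffusers"  # e.g. diffusion_pytorch_model.safetensors
    if any(k.startswith("encoder.down.") for k in keys):
        return "original"   # e.g. ae.safetensors from the repo root
    return "unknown"

# To read the keys from a real file (requires the safetensors package;
# the path below is a hypothetical example):
# from safetensors import safe_open
# with safe_open("models/VAE/ae.safetensors", framework="pt") as f:
#     print(vae_format(list(f.keys())))
```

If this prints "diffusers" for the file you dropped into models/VAE, that matches the mismatch reported in the log.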

killerciao avatar Aug 18 '24 17:08 killerciao