[Bug]: Custom Model outputs black screen
Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [x] The issue has been reported before but has not been fixed yet
What happened?
When I generate an image with a fine-tuned SD 1.5 model trained with EveryDream 2.0, it appears to work at first: the live preview clearly shows an image being generated. The problem is that at the very end, the final image comes out solid black.
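This looks related to the RuntimeWarning near the end of the console logs below (`invalid value encountered in cast` in `modules/processing.py`): if the decoded image tensor contains NaNs, casting it to `uint8` generally produces all zeros, i.e. a solid black image. A minimal sketch of that mechanism (the exact result of casting NaN is platform-dependent):

```python
import numpy as np

# Simulate a decoded sample whose pixels became NaN somewhere in the
# pipeline; the uint8 cast emits the same RuntimeWarning seen in the
# logs and typically yields zeros, i.e. a solid black image.
x_sample = np.full((64, 64, 3), np.nan, dtype=np.float32)
img = x_sample.astype(np.uint8)  # RuntimeWarning: invalid value encountered in cast
print(img.min(), img.max())      # typically "0 0"
```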
Steps to reproduce the problem
- Create a pod on RunPod with the ashleykza/forge:2.1.0 template
- Download a custom SD 1.5 model trained with EveryDream 2.0 into models/Stable-diffusion
- Generate an image.
What should have happened?
The image generates normally.
What browsers do you use to access the UI?
Google Chrome
Sysinfo
- Host: RunPod
- OS: Ubuntu Server 22.04 LTS
- GPU: Tesla V100
- CPU: 6 vCPU
- RAM: 47 GB
Console logs
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################
################################################################
Running on root user
################################################################
################################################################
Repo already cloned, using it as install directory
################################################################
################################################################
python venv already activate or run without venv: /workspace/venvs/stable-diffusion-webui-forge
################################################################
################################################################
Launching launch.py...
################################################################
glibc version is 2.35
Check TCMalloc: libtcmalloc_minimal.so.4
libtcmalloc_minimal.so.4 is linked with libc.so,execute LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Version: f0.0.17v1.8.0rc-latest-276-g29be1da7
Commit hash: 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
Installing sd-forge-controlnet requirement: fvcore
Installing sd-forge-controlnet requirement: mediapipe
Installing sd-forge-controlnet requirement: onnxruntime
Installing sd-forge-controlnet requirement: svglib
Legacy Preprocessor init warning: Unable to install insightface automatically. Please try run `pip install insightface` manually.
Installing forge_legacy_preprocessor requirement: handrefinerportable
Launching Web UI with arguments: -f --port 3001 --listen --api --xformers --enable-insecure-extension-access --no-half-vae
Total VRAM 16151 MB, total RAM 386601 MB
xformers version: 0.0.23.post1+cu118
Set vram state to: NORMAL_VRAM
Device: cuda:0 Tesla V100-PCIE-16GB : native
VAE dtype: torch.float32
CUDA Stream Activated: False
Using xformers cross attention
ControlNet preprocessor location: /workspace/stable-diffusion-webui-forge/models/ControlNetPreprocessor
Checkpoint realisticVisionV51_v51VAE.safetensors [15012c538f] not found; loading fallback realisticVisionV51_v51VAE.safetensors
Calculating sha256 for /workspace/stable-diffusion-webui-forge/models/Stable-diffusion/realisticVisionV51_v51VAE.safetensors: 2024-04-17 15:46:55,120 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL: http://0.0.0.0:3001
To create a public link, set `share=True` in `launch()`.
Startup time: 63.4s (prepare environment: 51.4s, import torch: 5.6s, import gradio: 0.7s, setup paths: 0.8s, initialize shared: 0.1s, other imports: 0.9s, load scripts: 2.1s, create ui: 0.7s, gradio launch: 0.4s, add APIs: 0.6s).
15012c538f503ce2ebfc2c8547b268c75ccdaff7a281db55399940ff1d70e21d
Loading weights [15012c538f] from /workspace/stable-diffusion-webui-forge/models/Stable-diffusion/realisticVisionV51_v51VAE.safetensors
model_type EPS
UNet ADM Dimension 0
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
To load target model SD1ClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 15841.0615234375
[Memory Management] Model Memory (MB) = 454.2076225280762
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 14362.853900909424
Moving model(s) has taken 0.08 seconds
Model loaded in 12.8s (calculate hash: 7.3s, forge load real models: 5.2s, calculate empty prompt: 0.2s).
Calculating sha256 for /workspace/stable-diffusion-webui-forge/models/Stable-diffusion/merged_v0.1.ckpt: 5822de351d0256a0d610182777355c7b496775454685f5cb52121e52971d1032
Loading weights [5822de351d] from /workspace/stable-diffusion-webui-forge/models/Stable-diffusion/merged_v0.1.ckpt
model_type EPS
UNet ADM Dimension 0
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
Missing VAE keys ['encoder.mid.attn_1.q.weight', 'encoder.mid.attn_1.q.bias', 'encoder.mid.attn_1.k.weight', 'encoder.mid.attn_1.k.bias', 'encoder.mid.attn_1.v.weight', 'encoder.mid.attn_1.v.bias', 'encoder.mid.attn_1.proj_out.weight', 'encoder.mid.attn_1.proj_out.bias', 'decoder.mid.attn_1.q.weight', 'decoder.mid.attn_1.q.bias', 'decoder.mid.attn_1.k.weight', 'decoder.mid.attn_1.k.bias', 'decoder.mid.attn_1.v.weight', 'decoder.mid.attn_1.v.bias', 'decoder.mid.attn_1.proj_out.weight', 'decoder.mid.attn_1.proj_out.bias']
Leftover VAE keys ['encoder.mid.attn_1.to_q.weight', 'encoder.mid.attn_1.to_q.bias', 'encoder.mid.attn_1.to_k.weight', 'encoder.mid.attn_1.to_k.bias', 'encoder.mid.attn_1.to_v.weight', 'encoder.mid.attn_1.to_v.bias', 'encoder.mid.attn_1.to_out.0.weight', 'encoder.mid.attn_1.to_out.0.bias', 'decoder.mid.attn_1.to_q.weight', 'decoder.mid.attn_1.to_q.bias', 'decoder.mid.attn_1.to_k.weight', 'decoder.mid.attn_1.to_k.bias', 'decoder.mid.attn_1.to_v.weight', 'decoder.mid.attn_1.to_v.bias', 'decoder.mid.attn_1.to_out.0.weight', 'decoder.mid.attn_1.to_out.0.bias']
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
To load target model SD1ClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 15764.7109375
[Memory Management] Model Memory (MB) = 454.2076225280762
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 14286.503314971924
Moving model(s) has taken 0.09 seconds
Model loaded in 10.0s (unload existing model: 0.4s, calculate hash: 5.7s, load weights from disk: 1.4s, forge load real models: 2.4s, calculate empty prompt: 0.1s).
To load target model BaseModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 15447.5341796875
[Memory Management] Model Memory (MB) = 1639.4137649536133
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 12784.120414733887
Moving model(s) has taken 0.87 seconds
0%| | 0/20 [00:00<?, ?it/s]
Total progress: 0%| | 0/20 [00:00<?, ?it/s]
5%|▌ | 1/20 [00:00<00:06, 2.95it/s]
Total progress: 15%|█▌ | 3/20 [00:00<00:00, 26.33it/s]
15%|█▌ | 3/20 [00:00<00:02, 7.43it/s]
25%|██▌ | 5/20 [00:00<00:01, 10.50it/s]
Total progress: 30%|███ | 6/20 [00:00<00:00, 19.83it/s]
35%|███▌ | 7/20 [00:00<00:01, 12.66it/s]
Total progress: 45%|████▌ | 9/20 [00:00<00:00, 18.81it/s]
45%|████▌ | 9/20 [00:00<00:00, 14.17it/s]
Total progress: 55%|█████▌ | 11/20 [00:00<00:00, 17.63it/s]
55%|█████▌ | 11/20 [00:00<00:00, 14.66it/s]
Total progress: 65%|██████▌ | 13/20 [00:00<00:00, 17.33it/s]
65%|██████▌ | 13/20 [00:01<00:00, 15.28it/s]
Total progress: 75%|███████▌ | 15/20 [00:00<00:00, 17.56it/s]
75%|███████▌ | 15/20 [00:01<00:00, 16.00it/s]
Total progress: 85%|████████▌ | 17/20 [00:00<00:00, 17.63it/s]
85%|████████▌ | 17/20 [00:01<00:00, 16.53it/s]
Total progress: 95%|█████████▌| 19/20 [00:01<00:00, 17.70it/s]
95%|█████████▌| 19/20 [00:01<00:00, 16.88it/s]
100%|██████████| 20/20 [00:01<00:00, 13.80it/s]
To load target model AutoencoderKL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 13781.10986328125
[Memory Management] Model Memory (MB) = 319.11416244506836
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 12437.995700836182
Moving model(s) has taken 0.15 seconds
/workspace/stable-diffusion-webui-forge/modules/processing.py:968: RuntimeWarning: invalid value encountered in cast
x_sample = x_sample.astype(np.uint8)
Total progress: 100%|██████████| 20/20 [00:01<00:00, 17.70it/s]
Total progress: 100%|██████████| 20/20 [00:01<00:00, 13.95it/s]
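One thing I noticed in the logs: the Missing/Leftover VAE keys reported for `merged_v0.1.ckpt` suggest its VAE attention weights are stored under the newer diffusers-style names (`to_q`/`to_k`/`to_v`/`to_out.0`) while the loader reports the ldm-style names (`q`/`k`/`v`/`proj_out`) as missing. I don't know whether Forge remaps these internally, but as a hypothetical experiment the keys could be renamed offline. The rename table comes straight from the log lines above; the conv reshape is an assumption about the layout, not a confirmed fix:

```python
import torch

# Hypothetical sketch: rename the "Leftover" diffusers-style VAE
# attention keys to the "Missing" ldm-style names from the log.
RENAMES = {
    "attn_1.to_q.": "attn_1.q.",
    "attn_1.to_k.": "attn_1.k.",
    "attn_1.to_v.": "attn_1.v.",
    "attn_1.to_out.0.": "attn_1.proj_out.",
}

def remap_vae_keys(state_dict):
    out = {}
    for key, tensor in state_dict.items():
        for src, dst in RENAMES.items():
            if src in key:
                key = key.replace(src, dst)
                # ldm attention layers are 1x1 convs, so 2-D linear
                # weights may need a trailing (1, 1) reshape (assumption).
                if key.endswith(".weight") and tensor.ndim == 2:
                    tensor = tensor.reshape(*tensor.shape, 1, 1)
                break
        out[key] = tensor
    return out

ckpt = torch.load("merged_v0.1.ckpt", map_location="cpu")
ckpt["state_dict"] = remap_vae_keys(ckpt["state_dict"])
torch.save(ckpt, "merged_v0.1_remapped.ckpt")
```

This assumes the .ckpt stores its weights under a `state_dict` key, which is typical for ldm-based trainers.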
Additional information
I did get it to generate a normal image once, but I haven't been able to reproduce that since.
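Since the launch arguments already include `--no-half-vae` and the logs show `VAE dtype: torch.float32`, a half-precision VAE doesn't seem to be the cause. One more check I can think of is whether the merged checkpoint itself contains non-finite weights (merges sometimes do), which would propagate through sampling and end in the black cast. A quick hypothetical scan:

```python
import torch

# Hypothetical diagnostic: look for NaN/inf tensors in the custom
# checkpoint; any hit would explain NaNs in the decoded image.
ckpt = torch.load("merged_v0.1.ckpt", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)
for name, tensor in state_dict.items():
    if torch.is_floating_point(tensor) and not torch.isfinite(tensor).all():
        print("non-finite values in:", name)
```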