
[HELP]: Only black images are generated.

Hollyta opened this issue 1 year ago • 30 comments

Hello everyone. Please help me, I'm a beginner. Please let me know if any files are missing and I'll add them.

Checklist

  • [x] The issue exists after disabling all extensions
  • [x] The issue exists on a clean installation of webui
  • [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • [x] The issue exists in the current version of the webui
  • [x] The issue has not been reported before recently
  • [ ] The issue has been reported before but has not been fixed yet

What happened?

Only a pitch-black image is generated. Setting Settings → VAE → "VAE type for decoding" to TAESD does generate an image, but the resolution is reduced. A1111 does not have the same problem.

Spec: RTX 4070 Ti

Version

version: f2.0.1v1.10.1-previous-518-gc3366a76 • python: 3.10.6 • torch: 2.3.1+cu121 • xformers: 0.0.27 • gradio: 4.40.0 • checkpoint: 196f87e50e

What should have happened?

Images should be generated normally.

Console logs

```
venv "D:\stable-diffusion-webui-forge\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f2.0.1v1.10.1-previous-518-gc3366a76
Commit hash: c3366a7689427751d08e4ee30842bde4c9a83ce6
Installing xformers
Launching Web UI with arguments: --xformers --ckpt-dir D:/stable-diffusion-webui/models/Stable-diffusion --hypernetwork-dir D:/stable-diffusion-webui/models/hypernetworks --embeddings-dir D:/stable-diffusion-webui/embeddings --lora-dir D:/stable-diffusion-webui/models/Lora --vae-dir D:/stable-diffusion-webui/models/VAE
Total VRAM 12282 MB, total RAM 65246 MB
pytorch version: 2.3.1+cu121
WARNING:xformers:A matching Triton is not available, some optimizations will not be enabled
Traceback (most recent call last):
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\xformers\__init__.py", line 57, in _is_triton_available
    import triton  # noqa
ModuleNotFoundError: No module named 'triton'
xformers version: 0.0.27
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 Ti : native
Hint: your device supports --cuda-malloc for potential speed improvements.
VAE dtype preferences: [torch.bfloat16, torch.float32] -> torch.bfloat16
CUDA Using Stream: False
Using xformers cross attention
Using xformers attention for VAE
ControlNet preprocessor location: D:\stable-diffusion-webui-forge\models\ControlNetPreprocessor
2024-09-08 12:06:24,821 - ControlNet - INFO - ControlNet UI callback registered.
Model selected: {'checkpoint_info': {'filename': 'D:\stable-diffusion-webui\models\Stable-diffusion\matrixHentaiPony_v160b.safetensors', 'hash': 'eebca96d'}, 'additional_modules': ['D:\stable-diffusion-webui\models\VAE\sdxl.vae.safetensors'], 'unet_storage_dtype': None}
Using online LoRAs in FP16: False
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Startup time: 23.2s (prepare environment: 4.9s, import torch: 14.8s, other imports: 0.3s, load scripts: 1.0s, create ui: 1.5s, gradio launch: 0.7s).
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
[GPU Setting] You will use 91.66% GPU memory (11257.00 MB) to load weights, and use 8.34% GPU memory (1024.00 MB) to do matrix computation.
Loading Model: {'checkpoint_info': {'filename': 'D:\stable-diffusion-webui\models\Stable-diffusion\matrixHentaiPony_v160b.safetensors', 'hash': 'eebca96d'}, 'additional_modules': ['D:\stable-diffusion-webui\models\VAE\sdxl.vae.safetensors'], 'unet_storage_dtype': None}
[Unload] Trying to free all memory for cuda:0 with 0 models keep loaded ... Done.
StateDict Keys: {'unet': 1680, 'vae': 250, 'text_encoder': 197, 'text_encoder_2': 518, 'ignore': 0}
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
IntegratedAutoencoderKL Unexpected: ['model_ema.decay', 'model_ema.num_updates']
K-Model Created: {'storage_dtype': torch.float16, 'computation_dtype': torch.float16}
Model loaded in 0.7s (unload existing model: 0.1s, forge model load: 0.5s).
[Unload] Trying to free 3051.58 MB for cuda:0 with 0 models keep loaded ... Done.
[Memory Management] Target: JointTextEncoder, Free GPU: 11025.90 MB, Model Require: 1559.68 MB, Previously Loaded: 0.00 MB, Inference Require: 1024.00 MB, Remaining: 8442.22 MB, All loaded to GPU.
Moving model(s) has taken 0.55 seconds
[Unload] Trying to free 1024.00 MB for cuda:0 with 1 models keep loaded ... Current free memory is 9216.19 MB ... Done.
[Unload] Trying to free 7656.40 MB for cuda:0 with 0 models keep loaded ... Current free memory is 9215.34 MB ... Done.
[Memory Management] Target: KModel, Free GPU: 9215.34 MB, Model Require: 4897.05 MB, Previously Loaded: 0.00 MB, Inference Require: 1024.00 MB, Remaining: 3294.29 MB, All loaded to GPU.
Moving model(s) has taken 1.80 seconds
100%|██████████| 20/20 [00:05<00:00, 4.00it/s]
[Unload] Trying to free 4495.36 MB for cuda:0 with 0 models keep loaded ... Current free memory is 4178.18 MB ... Unload model JointTextEncoder Current free memory is 5938.54 MB ... Done.
[Memory Management] Target: IntegratedAutoencoderKL, Free GPU: 5938.54 MB, Model Require: 159.56 MB, Previously Loaded: 0.00 MB, Inference Require: 1024.00 MB, Remaining: 4754.98 MB, All loaded to GPU.
Moving model(s) has taken 0.39 seconds
D:\stable-diffusion-webui-forge\modules\processing.py:1010: RuntimeWarning: invalid value encountered in cast
  x_sample = x_sample.astype(np.uint8)
Total progress: 100%|██████████| 20/20 [00:05<00:00, 3.74it/s]
Total progress: 100%|██████████| 20/20 [00:05<00:00, 4.22it/s]
```

Hollyta avatar Sep 08 '24 03:09 Hollyta

Try adding the --no-half CMD flag?

s4130 avatar Sep 08 '24 05:09 s4130

What sampling method are you using? I have the same issue with "[Forge] Flux Realistic"

erik-trifonov avatar Sep 08 '24 07:09 erik-trifonov

I have the same problem. fresh install. black image.

kiri8969 avatar Sep 08 '24 22:09 kiri8969

Try adding the --no-half CMD flag?

My webui-user.bat:

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--xformers --no-half

@REM Uncomment following code to reference an existing A1111 checkout.
set A1111_HOME=D:/stable-diffusion-webui
@REM
@REM set VENV_DIR=%A1111_HOME%/venv
set COMMANDLINE_ARGS=%COMMANDLINE_ARGS% ^
 --ckpt-dir %A1111_HOME%/models/Stable-diffusion ^
 --hypernetwork-dir %A1111_HOME%/models/hypernetworks ^
 --embeddings-dir %A1111_HOME%/embeddings ^
 --lora-dir %A1111_HOME%/models/Lora ^
 --vae-dir %A1111_HOME%/models/VAE

call webui.bat
```

It didn't work.

Hollyta avatar Sep 08 '24 23:09 Hollyta

What sampling method are you using? I have the same issue with "[Forge] Flux Realistic"

DPM++ 2M SDE Karras

Hollyta avatar Sep 08 '24 23:09 Hollyta

  • GPU drivers -> tried updating/downgrading
  • PC re-initialization -> tried
  • GPU benchmark -> high score
  • memtest86 -> already done

Is there anything else we can try?

Hollyta avatar Sep 13 '24 08:09 Hollyta

I have the same issue after I upgraded to the version where you can select Flux. I am on a Mac M1. It sometimes gives back pictures; it has to do with this new version. I have no issue with version [f0.0.17v1.8.0rc-latest-276-g29be1da7].

Chris

calberts avatar Sep 13 '24 09:09 calberts

I found a workaround for the black image issue:

  • First, generate an image with the default Scale value (5 in SDXL)
  • For subsequent image generations, you can freely adjust the Scale value as desired

Once the first image is generated successfully with the default Scale, the system becomes more stable, allowing you to modify the Scale setting without encountering the black image problem.

hqsprn63 avatar Sep 13 '24 18:09 hqsprn63

I found a workaround for the black image issue:

  • First, generate an image with the default Scale value (5 in SDXL)
  • For subsequent image generations, you can freely adjust the Scale value as desired

Once the first image is generated successfully with the default Scale, the system becomes more stable, allowing you to modify the Scale setting without encountering the black image problem.

By “Scale” you mean CFG Scale, right? I tried that, but it didn't work.

Hollyta avatar Sep 14 '24 00:09 Hollyta

I have a similar problem. It used to work perfectly before (I would say the problem started after the Flux updates). I am on macOS. I've tried clean installations (Stability Matrix & vanilla) but that didn't fix it.

The weird part is that not every image is black. If I generate a batch of 9, a few will come out fine:

(attached image: grid-0001)

Everything is working well with auto1111

YofarDev avatar Sep 23 '24 16:09 YofarDev

Same issue, but only with SDXL; SD 1.5 works just fine. I have tried different samplers but the issue persists. Please help.

It's also giving this weird warning: RuntimeWarning: invalid value encountered in cast x_sample = x_sample.astype(np.uint8)

Elura21 avatar Oct 19 '24 11:10 Elura21

For me it happens if I use more than 15 sampling steps. Below that it works all the time; if I increase it to 16 I get black images.

JonatasBarros avatar Oct 31 '24 19:10 JonatasBarros

The issue still occurs on a clean install. Does that mean the issue occurs with the default GPU weight and also other checkpoints?

jswag245 avatar Nov 07 '24 16:11 jswag245

I'm having the same issue on MPS / macOS, M3 Pro 36 GB, with SDXL. Above a certain number of steps, black images will always be produced. Also getting the x_sample.astype(np.uint8) warning. Upgrading to PyTorch Nightly has no effect.

n0kovo avatar Nov 10 '24 11:11 n0kovo

I've never gotten Forge to create a single image on a Mac (Apple M2 Ultra, Sonoma, 128 GB memory). I've used Automatic1111 for years. Nothing I've tried can get Forge to create anything other than blank/black images, and I'm using the exact same models in A1111. So far I've only tried SDXL, but I would like to use Flux. I can't see any reason to spend more time with Forge unless someone can suggest how to get it working.

reekes avatar Dec 01 '24 22:12 reekes

Exact same issue here: black images only with SDXL. A1111 is working perfectly, so I'm just going to stick with that for now. I wanted to try Flux, though.

devalladares avatar Dec 29 '24 00:12 devalladares

Facing the same issue. Is there any workaround yet?

@devalladares did you find anything? I want to use Flux as well. @reekes

saurabhthesuperhero avatar Jan 25 '25 19:01 saurabhthesuperhero

@devalladares did you find anything? I want to use Flux as well. @reekes

I cleaned out everything, then did a full re-install. From there I was able to link Forge to my existing SD Web UI (Automatic1111) installation. I got Forge to run and create images just like Automatic1111, but it wasn't faster; I think it was actually slower. Maybe there are hidden, undocumented options for speeding it up, but I doubt it. So I gave up on Forge. It took me months to tune my Automatic1111 install and get it working well on a Mac.

I haven't tried running with Flux because Automatic1111 doesn't support it, and switching to Forge just for Flux isn't interesting for me. Maybe someday. I'll also say that I doubt Automatic1111 will ever support Flux, so the only option is to use Forge.

I'm sure Forge supports Flux, but I don't want to spend the time getting it to work. I've already found that getting SD working well on a Mac is super time-consuming, with lots of incompatibilities. It's barely documented, so learning how to run SD on a Mac is expensive. I'm not sure I'll ever switch to Forge, since it doesn't seem to be focused on supporting Macs.

reekes avatar Jan 25 '25 21:01 reekes

@reekes Yes, for me Automatic1111 worked better, but Forge does work for me now, although it often generated black images. Workaround: keep the sampling steps above roughly 32, or below 15. For example, run once at 14 steps, then at 33, and it should work.

So Flux should work, because a maximum of 4 sampling steps is enough for it.

I tried Flux last night and kept the steps at 1-4.

Do try Flux; it should work then.

For most of those runs my M4 with 24 GB of RAM hung a lot. One step was fine, but it still took 2 minutes, so I'm going to give up on Flux for now and wait to see if it gets faster, because it's not worth my time.

If you're not that interested in Flux right now, like me, I suggest sticking with Automatic1111 and hoping it gets an update.

saurabhthesuperhero avatar Jan 26 '25 17:01 saurabhthesuperhero

@reekes

(attached image)

Sorry man, but are you really this person?? I mean, I got my first Mac on launch day, I'm still listening to that sound, and you're the creator of it, and of QuickTime too. It feels like talking to a celebrity.

Anyway, thanks.

saurabhthesuperhero avatar Jan 26 '25 17:01 saurabhthesuperhero

Good news: I was able to successfully generate images using the method quoted below.

EDIT: I solved this problem for the GTX 1060 using these command line arguments: --all-in-fp16 --vae-in-fp32

https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/1416#issue-2481511459
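For anyone who wants to try the same thing, here is a minimal sketch of where those two flags would go in webui-user.bat. This assumes an otherwise default Forge install; the flags come from the linked issue, where they were reported for a GTX 1060, so results on other GPUs may differ.

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
@REM Keep the UNet/text encoders in fp16 but decode with the VAE in fp32,
@REM the combination quoted from issue #1416 above.
set COMMANDLINE_ARGS=--xformers --all-in-fp16 --vae-in-fp32

call webui.bat
```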

Hollyta avatar Jan 26 '25 17:01 Hollyta

Sorry man, but are you really this person??

Sorry? Yeah, that's me. How many Jim Reekes can there be?

reekes avatar Jan 26 '25 20:01 reekes

@reekes Oh Okay nice to meet you.

saurabhthesuperhero avatar Jan 27 '25 10:01 saurabhthesuperhero

I am not using Forge; I am using SwarmUI with a ComfyUI backend, but I was having this issue on an Apple M3, Sequoia 15.3 (24D60), after doing all the typical stuff to resolve it:

  • brew update
  • brew upgrade (just to get anything obvious out of the way)
  • pip install --upgrade transformers diffusers accelerate pillow opencv-python scipy tqdm compel onnxruntime omegaconf einops safetensors flask uvicorn websockets aiohttp
  • pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
  • Updating Swarm via git pull, etc.

The solution for me was in Server > Backends, finding my ComfyUI Self-Starting backend and adding the following in ExtraArgs:

--listen --use-split-cross-attention

Whatever the equivalent command is in Forge (I'm sure it exists), add those arguments (or try Swarm).
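For the Forge side, here is a minimal sketch of what webui-user.bat might look like. The flag name is an assumption on my part (Forge's backend exposes ComfyUI-style attention options, and --attention-split appears to be the counterpart of --use-split-cross-attention), so check the output of webui.bat --help on your build before relying on it.

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
@REM Assumed Forge counterpart of ComfyUI's --use-split-cross-attention;
@REM verify the exact flag name with `webui.bat --help` on your install.
set COMMANDLINE_ARGS=--attention-split

call webui.bat
```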

Shanesan avatar Jan 27 '25 21:01 Shanesan

Hey! Same issue here. I'm on an Apple M3 Max chip with 48 GB of RAM/VRAM.

I tested some things: up to 19 samples it works perfectly, but when I retry with more than 19 samples, I always get a black image. By the way, hires fix always works as intended: normal image -> better-looking image, black image -> better-looking black image.

I tested the same installation on a Windows computer with 64 GB of RAM and an RTX 4070 (12 GB VRAM) and never got a black image at all.

ShadowGcraft avatar Feb 07 '25 00:02 ShadowGcraft

Same issue here, but only with FLUX: yesterday (first install of Forge) I was able to generate plenty of images. Today I first tried the same thing as yesterday (using PNG Info). Fix: I fixed it by taking a close look at the model (Flux) and the UI setting. Normally, if Flux is chosen, the negative prompt is disabled, but here it was enabled. What I did: reload (Ctrl-F5), then choose the Flux UI and the Flux model. Now it works.

Problem: if I load from PNG Info into txt2img, it does not always set the model correctly. In my case it stuck with Flux, but everything else was set for XL, yielding black images.

dermoritz avatar Mar 17 '25 07:03 dermoritz

I encountered a similar issue on an M4 with SDXL. I can generate only one normal image, then all black images, every time I load a new model. For now I temporarily force the model to reload every time I hit the "Generate" button by commenting out lines 476-477 in sd_models.py. I suppose this is a bug in Forge.

1057237562 avatar May 26 '25 14:05 1057237562

@1057237562 Can't believe this bug is still there. Four months ago I tried Forge on my M4 and deleted it.

saurabhthesuperhero avatar May 31 '25 12:05 saurabhthesuperhero

OMG, the bug is still here. I use Forge on Windows with an RTX 3050 (4 GB VRAM). SD, XL, and Flux can all create an image the first time, but the second one will be a black image. It's unstable. I've tried everything but it didn't fix it. Anything else?

xhuyvn avatar Oct 29 '25 23:10 xhuyvn

I had this issue for a while (around Feb 2025). I did a whole load of things, including adding "--no-half --precision full" to the launch arguments, rolling back Nvidia drivers, completely wiping my ForgeUI setup, reinstalling Windows, and completely underclocking and undervolting my GPU. The conclusion is that my GPU hardware is degrading: I'm getting TDR crashes and some games are completely unplayable.

I've put "set CUDA_LAUNCH_BLOCKING=1" line into my "webui-user.bat" file in the ForgeUI directory and it seems to have stopped the crashing for the time being as well as the black images (although adetailer seems to get black generations every now and again).

If you're having this issue, it may be hardware related.

I have a RTX 3080ti Laptop GPU if anyone was wondering.

lemonsareamazing avatar Nov 05 '25 19:11 lemonsareamazing