entirely black image generated

DennisPeeters opened this issue 1 year ago


Windows 10 installation, RTX 3060 laptop, driver 531.79. Tried different models.

All images come out black; I'm not sure how to find a solution here.

DennisPeeters avatar Dec 04 '23 22:12 DennisPeeters

I encounter the same issue; I ran Fooocus on macOS.

herohung093 avatar Dec 05 '23 13:12 herohung093

same issue, also on mac

mlison avatar Dec 08 '23 10:12 mlison

Same issue on Mac, i5-8400, RX 6600 XT

marianoarga avatar Dec 17 '23 21:12 marianoarga

same issue on mac pro m1

so-joinplank avatar Dec 24 '23 13:12 so-joinplank

The issue has been addressed. Please make sure to use the latest version of Fooocus. You can find more information about this here:

  • https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/12019
  • https://github.com/vladmandic/automatic/issues/1858
  • https://huggingface.co/stabilityai/stable-diffusion-2-1/discussions/9
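
For context on why this happens at all: as the linked threads describe, the SDXL / SD 2.1 VAE can overflow in float16, the overflowed values become inf/NaN, and a NaN image renders as solid black. A tiny standalone PyTorch snippet (illustration only, not Fooocus code) shows the mechanism:

```python
import torch

# float16 tops out at ~65504; activations above that overflow to inf,
# and inf - inf (or 0 * inf) later becomes NaN -- which renders as a black frame.
x = torch.tensor([70000.0])
print(x.to(torch.float16))                        # tensor([inf], dtype=torch.float16)
print(x.to(torch.float16) - x.to(torch.float16))  # tensor([nan], dtype=torch.float16)
print(x.to(torch.float32) - x.to(torch.float32))  # tensor([0.]) -- fine in fp32
```

That is why the usual fixes in those threads come down to either a patched fp16 VAE or keeping the VAE in float32.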

mashb1t avatar Dec 30 '23 14:12 mashb1t

The issue persists, using the latest main branch code.

marianoarga avatar Jan 04 '24 16:01 marianoarga

Can you please share

  • the model you were using
  • the arguments for entry_with_update.py

with us? This allows us to debug whether it's an fp16 SAI issue or something in Fooocus.
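
If it helps with narrowing that down, here is a rough, untested sketch (using the safetensors library; the `first_stage_model.` prefix is the usual naming for VAE weights in SD/SDXL checkpoints, and the helper name and path are just examples) to see which dtypes a checkpoint actually stores for its VAE:

```python
from collections import Counter
from safetensors import safe_open

def vae_dtypes(checkpoint_path: str) -> Counter:
    """Count the dtypes of the VAE ("first_stage_model.") tensors in an SD/SDXL checkpoint."""
    counts = Counter()
    with safe_open(checkpoint_path, framework="pt", device="cpu") as f:
        for key in f.keys():
            if key.startswith("first_stage_model."):
                counts[str(f.get_tensor(key).dtype)] += 1
    return counts

# Example with the checkpoint mentioned in this thread (adjust the path to your models folder):
print(vae_dtypes("models/checkpoints/juggernautXL_version6Rundiffusion.safetensors"))
```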

mashb1t avatar Jan 04 '24 16:01 mashb1t

Sure thing:

main branch, up to date. macOS Monterey, 32 GB RAM, i5-8400, RX 6600 XT 8 GB

  • juggernautXL_version6Rundiffusion.safetensors
  • --preset realistic

marianoarga avatar Jan 04 '24 16:01 marianoarga

Running with --preset realistic changes the model to "realisticStockPhoto_v10", so I tested both.

marianoarga avatar Jan 04 '24 16:01 marianoarga

The log output so far. The image has already turned black (it initially showed noise) and will remain black until the end of the process:

```
entry_with_update.py --preset realistic
Already up-to-date
Update succeeded.
[System ARGV] ['entry_with_update.py', '--preset', 'realistic']
Loaded preset: /Users/mariano/development/AI/Fooocus/presets/realistic.json
Failed to load config key: {"path_checkpoints": "/Users/mariano/development/Fooocus/models/checkpoints"} is invalid or does not exist; will use {"path_checkpoints": "../models/checkpoints/"} instead.
Failed to load config key: {"path_loras": "/Users/mariano/development/Fooocus/models/loras"} is invalid or does not exist; will use {"path_loras": "../models/loras/"} instead.
Failed to load config key: {"path_embeddings": "/Users/mariano/development/Fooocus/models/embeddings"} is invalid or does not exist; will use {"path_embeddings": "../models/embeddings/"} instead.
Failed to load config key: {"path_vae_approx": "/Users/mariano/development/Fooocus/models/vae_approx"} is invalid or does not exist; will use {"path_vae_approx": "../models/vae_approx/"} instead.
Failed to load config key: {"path_upscale_models": "/Users/mariano/development/Fooocus/models/upscale_models"} is invalid or does not exist; will use {"path_upscale_models": "../models/upscale_models/"} instead.
Failed to load config key: {"path_inpaint": "/Users/mariano/development/Fooocus/models/inpaint"} is invalid or does not exist; will use {"path_inpaint": "../models/inpaint/"} instead.
Failed to load config key: {"path_controlnet": "/Users/mariano/development/Fooocus/models/controlnet"} is invalid or does not exist; will use {"path_controlnet": "../models/controlnet/"} instead.
Failed to load config key: {"path_clip_vision": "/Users/mariano/development/Fooocus/models/clip_vision"} is invalid or does not exist; will use {"path_clip_vision": "../models/clip_vision/"} instead.
Failed to load config key: {"path_fooocus_expansion": "/Users/mariano/development/Fooocus/models/prompt_expansion/fooocus_expansion"} is invalid or does not exist; will use {"path_fooocus_expansion": "../models/prompt_expansion/fooocus_expansion"} instead.
Failed to load config key: {"path_outputs": "/Users/mariano/development/Fooocus/outputs"} is invalid or does not exist; will use {"path_outputs": "../outputs/"} instead.
Python 3.10.13 (main, Nov 1 2023, 16:44:37) [Clang 14.0.0 (clang-1400.0.29.202)]
Fooocus version: 2.1.860
Running on local URL: http://127.0.0.1:7865
To create a public link, set share=True in launch().
Total VRAM 32768 MB, total RAM 32768 MB
Set vram state to: SHARED
Always offload VRAM
Device: mps
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --attention-split
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Base model loaded: /Users/mariano/development/AI/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors
Request to load LoRAs [['SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors', 0.25], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/Users/mariano/development/AI/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors].
Loaded LoRA [/Users/mariano/development/AI/Fooocus/models/loras/SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for UNet [/Users/mariano/development/AI/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors] with 788 keys at weight 0.25.
Loaded LoRA [/Users/mariano/development/AI/Fooocus/models/loras/SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for CLIP [/Users/mariano/development/AI/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors] with 264 keys at weight 0.25.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cpu, use_fp16 = False.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 4.05 seconds
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 3.0
[Parameters] Seed = 6473121032613553998
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Base model loaded: /Users/mariano/development/AI/Fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors
Request to load LoRAs [['SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors', 0.25], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/Users/mariano/development/AI/Fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors].
Loaded LoRA [/Users/mariano/development/AI/Fooocus/models/loras/SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for UNet [/Users/mariano/development/AI/Fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors] with 788 keys at weight 0.25.
Loaded LoRA [/Users/mariano/development/AI/Fooocus/models/loras/SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for CLIP [/Users/mariano/development/AI/Fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors] with 264 keys at weight 0.25.
Requested to load SDXLClipModel
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 3.04 seconds
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] a cute dog, vivid colors, elegant, highly detailed, sharp focus, intricate, innocent, fine aesthetic, colorful, magical, mystical, winning, deep background, professional, cinematic, ambient, artistic, sublime, extremely inspirational, composed, beautiful, dramatic, thought, epic, stunning, light, pristine, magic, pure, full, strong, creative, loving, amazing
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding negative #1 ...
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: Image Space (1152, 896)
Preparation time: 53.99 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 12.86 seconds
/Users/mariano/development/AI/Fooocus/ldm_patched/k_diffusion/sampling.py:699: UserWarning: MPS: nonzero op is supported natively starting from macOS 13.0. Falling back on CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/operations/Indexing.mm:283.)
  sigma_min, sigma_max = sigmas[sigmas > 0].min(), sigmas.max()
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
  - Avoid using tokenizers before the fork if possible
  - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
  0%| | 0/30 [00:00<?, ?it/s]
/usr/local/lib/python3.10/site-packages/torch/nn/functional.py:3983: UserWarning: MPS: 'nearest' mode upsampling is supported natively starting from macOS 13.0. Falling back on CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/operations/UpSample.mm:255.)
  return torch._C._nn.upsample_nearest2d(input, output_size, scale_factors)
/Users/mariano/development/AI/Fooocus/modules/anisotropic.py:132: UserWarning: The operator 'aten::std_mean.correction' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:13.)
  s, m = torch.std_mean(g, dim=(1, 2, 3), keepdim=True)
 27%|████████████████████████████████████████▊ | 8/30 [02:26<06:00, 16.40s/it]
/Users/mariano/development/AI/Fooocus/modules/core.py:260: RuntimeWarning: invalid value encountered in cast
  x_sample = x_sample.cpu().numpy().clip(0, 255).astype(np.uint8)
 43%|█████████████████████████████████████████████████████████████████▊
```
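
If I'm reading the log right, the suspicious part is the RuntimeWarning from modules/core.py ("invalid value encountered in cast"): the decoded image already contains NaNs before it is converted to 8-bit, and NaNs survive clip() and then typically cast to 0, i.e. black. A standalone NumPy snippet (not Fooocus code) reproduces exactly that warning:

```python
import numpy as np

# Pretend decoder output: one NaN pixel among normal values.
x_sample = np.array([[np.nan, 0.5, 200.0]])
clipped = x_sample.clip(0, 255)      # NaN is NOT removed by clipping
as_uint8 = clipped.astype(np.uint8)  # -> RuntimeWarning: invalid value encountered in cast
print(as_uint8)                      # the NaN typically ends up as 0, i.e. a black pixel
```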

marianoarga avatar Jan 04 '24 16:01 marianoarga

> same issue on mac pro m1

@so-joinplank @marianoarga I just tested on an M1 MacBook Pro; it works without issues in a conda environment as described in https://github.com/lllyasviel/Fooocus?tab=readme-ov-file#mac. Please make sure you have followed all steps of the instructions and confirm that the issue also happens on a fresh installation.

You may also use --disable-offload-from-vram to boost speed between generations.

mashb1t avatar Jan 04 '24 21:01 mashb1t

The issue persists; it may be related to my i5 Mac. Thank you for your time, I will test on other devices.

marianoarga avatar Jan 04 '24 23:01 marianoarga

@mashb1t I reinstalled with the instructions from https://github.com/lllyasviel/Fooocus?tab=readme-ov-file#mac and am now getting a "grey" image instead. The GPU is doing some kind of processing, but no image comes out. FYI: DiffusionBee works with my setup, but I cannot load models into it.

Screen Shot 2024-01-10 at 19 22 36
![Screen Shot 2024-01-10 at 19 22 27](https://github.com/lllyasviel/Fooocus/assets/3533839/15ae5471-edd7-451b-98b4-94894952f485)

kamil6x avatar Jan 10 '24 19:01 kamil6x

You might try --always-cpu and check whether inference works in general, but most likely the combination of macOS with Radeon graphics doesn't work, and Fooocus doesn't offer official support for it.
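
As an additional, unofficial sanity check (plain PyTorch, independent of Fooocus), you can compare a small computation on the mps device against the CPU in both float16 and float32; NaNs or large deviations would point at the MPS/Radeon combination rather than at Fooocus:

```python
import torch

if not torch.backends.mps.is_available():
    print("MPS backend not available; Fooocus would need --always-cpu on this machine.")
else:
    for dtype in (torch.float16, torch.float32):
        a = torch.randn(256, 256)
        ref = a @ a.t()  # float32 reference computed on CPU
        out = (a.to("mps", dtype) @ a.to("mps", dtype).t()).float().cpu()
        print(dtype,
              "max abs diff vs CPU:", (ref - out).abs().max().item(),
              "NaNs:", torch.isnan(out).any().item())
```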

> Mac is not intensively tested. Below is an unofficial guideline for using Mac. You can discuss problems in https://github.com/lllyasviel/Fooocus/pull/129.

Maybe there already is a solution for you in https://github.com/lllyasviel/Fooocus/pull/129, so please try your luck there and feel free to reference this issue. I'm so sorry!

mashb1t avatar Jan 10 '24 19:01 mashb1t

Thanks for the response. I switched to something else, as macOS is not really supported.

kamil6x avatar Mar 07 '24 21:03 kamil6x

> The issue has been addressed. Please make sure to use the latest version of Fooocus. You can find more information about this here:

How do I check the Fooocus version and update it?

Bonghui avatar Jun 05 '24 02:06 Bonghui

@Bonghui Fooocus tries to auto-update on startup. You can find the version either in the console log on startup, in the browser tab title, or in the file fooocus_version.txt, where the current version 2.4.1 should be shown.
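
If you prefer checking from a terminal, here is a minimal sketch (assuming you run it from the Fooocus installation directory, so the file mentioned above is present):

```python
from pathlib import Path

version_file = Path("fooocus_version.txt")  # file referenced above, in the Fooocus root
if version_file.exists():
    print("Fooocus version:", version_file.read_text().strip())
else:
    print("fooocus_version.txt not found - run this from the Fooocus directory")
```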

mashb1t avatar Jun 05 '24 06:06 mashb1t