
[Bug]: image not generating

Open alexbespik opened this issue 1 year ago • 0 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits

What happened?

File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 490, in sdp_attnblock_forward out = torch.nn.functional.scaled_dot_product_attention(q, k, v, dropout_p=0.0, is_causal=False) RuntimeError: Expected query, key, and value to have the same dtype, but got query.dtype: float key.dtype: float and value.dtype: c10::Half instead.

100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [01:23<00:00, 4.15s/it] Error completing request:50, 3.92s/it]
Arguments: ('task(cv0zpfy6gkj9f2x)', 'Cat', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0) {} Traceback (most recent call last): File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/modules/call_queue.py", line 57, in f res = list(func(*args, **kwargs)) File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/modules/call_queue.py", line 37, in f res = func(*args, **kwargs) File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img processed = process_images(p) File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/modules/processing.py", line 515, in process_images res = process_images_inner(p) File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/modules/processing.py", line 671, in process_images_inner x_samples_ddim = [decode_first_stage(p.sd_model, samples_ddim[i:i+1].to(dtype=devices.dtype_vae))[0].cpu() for i in range(samples_ddim.size(0))] File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/modules/processing.py", line 671, in x_samples_ddim = [decode_first_stage(p.sd_model, samples_ddim[i:i+1].to(dtype=devices.dtype_vae))[0].cpu() for i in range(samples_ddim.size(0))] File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/modules/processing.py", line 444, in decode_first_stage x = model.decode_first_stage(x) File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs)) File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/modules/sd_hijack_utils.py", line 28, in call return self.__orig_func(*args, **kwargs) File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 826, in decode_first_stage return self.first_stage_model.decode(z) File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/modules/lowvram.py", line 52, in first_stage_model_decode_wrap return first_stage_model_decode(z) File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/autoencoder.py", line 90, in decode dec = self.decoder(z) File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/model.py", 
line 631, in forward h = self.mid.attn_1(h) File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 490, in sdp_attnblock_forward out = torch.nn.functional.scaled_dot_product_attention(q, k, v, dropout_p=0.0, is_causal=False) RuntimeError: Expected query, key, and value to have the same dtype, but got query.dtype: float key.dtype: float and value.dtype: c10::Half instead.

^CInterrupted with signal 2 in <frame at 0x55d78f94cfd0, file '/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/webui.py', line 266, code wait_on_server>
$ prime-run sh webui.sh --medvram --xformers
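The call that actually fails in the traceback above is PyTorch's `torch.nn.functional.scaled_dot_product_attention`, which requires query, key, and value to share one dtype; here the VAE decode path hands it float32 `q`/`k` but a float16 `v`. Below is a minimal, self-contained sketch of the mismatch and of an upcast-style workaround (illustrative only; it assumes PyTorch >= 2.0, the tensor shapes are invented, and this is not the web UI's actual code):

```python
import torch
import torch.nn.functional as F

# Attention-shaped tensors; the shapes are invented for illustration.
# q and k arrive as float32 while v stayed float16, matching the traceback.
q = torch.randn(1, 8, 64, 40, dtype=torch.float32)  # query.dtype: float
k = torch.randn(1, 8, 64, 40, dtype=torch.float32)  # key.dtype:   float
v = torch.randn(1, 8, 64, 40, dtype=torch.float16)  # value.dtype: c10::Half

try:
    # The same call sdp_attnblock_forward makes, minus the hijack machinery.
    F.scaled_dot_product_attention(q, k, v, dropout_p=0.0, is_causal=False)
except RuntimeError as e:
    print(e)  # Expected query, key, and value to have the same dtype ...

# Bringing all three operands to one dtype makes the call legal; this is
# the upcast-to-float32 style of fix the error message below hints at.
out = F.scaled_dot_product_attention(q, k, v.float(), dropout_p=0.0, is_causal=False)
print(out.dtype)  # torch.float32
```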

Steps to reproduce the problem

Launch the web UI (`prime-run sh webui.sh --medvram --xformers`) and try to generate any image; the logs above used the prompt "Cat" at 512×512 with 20 steps.

What should have happened?

The image should have been generated without errors.

Commit where the problem happens

5ab7f213

What platforms do you use to access the UI ?

Linux

What browsers do you use to access the UI ?

Mozilla Firefox

Command Line Arguments

--medvram --xformers

List of extensions

[screenshot of the installed extensions]

Console logs

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################

################################################################
Running on alexbespik user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
Create and activate python venv
################################################################

################################################################
Launching launch.py...
################################################################
Using TCMalloc: libtcmalloc.so.4
Python 3.10.10 (main, Mar  5 2023, 22:26:53) [GCC 12.2.1 20230201]
Commit hash: 5ab7f213bec2f816f9c5644becb32eb72c8ffb89
Installing requirements
Launching Web UI with arguments: --medvram --xformers
Loading weights [3f8f827f79] from /run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/models/Stable-diffusion/amIReal_V2.safetensors
Creating model from config: /run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying xformers cross attention optimization.
Textual inversion embeddings loaded(0): 
Model loaded in 1.7s (load weights from disk: 0.2s, create model: 0.5s, apply weights to model: 0.7s, apply half(): 0.3s).
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 6.8s (import torch: 1.1s, import gradio: 1.0s, import ldm: 1.2s, other imports: 0.6s, setup codeformer: 0.2s, load scripts: 0.5s, load SD checkpoint: 1.8s, create ui: 0.3s).
  0%|                                                                                                                                                                                                               | 0/20 [00:15<?, ?it/s]
Error completing request
Arguments: ('task(h8cw636qoijzxyn)', 'Cat', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0) {}
Traceback (most recent call last):
  File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/modules/call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/modules/processing.py", line 515, in process_images
    res = process_images_inner(p)
  File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/modules/processing.py", line 669, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/modules/processing.py", line 887, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 377, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 251, in launch_sampling
    return func()
  File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 377, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 167, in forward
    devices.test_for_nans(x_out, "unet")
  File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/Stable Diffusion/stable-diffusion-webui/modules/devices.py", line 156, in test_for_nans
    raise NansException(message)
modules.devices.NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.

Additional information

[screenshot]
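For context, the `NansException` in the console log above is the web UI's own sanity check rather than a PyTorch crash: `devices.test_for_nans` inspects the UNet output and aborts when it has degenerated to all NaNs. On GPUs with weak fp16 support, a common mechanism is float16 overflow, which this small sketch demonstrates (plain PyTorch, unrelated to the web UI code):

```python
import torch

# float16 tops out near 65504: anything past that overflows to inf, and
# inf arithmetic downstream is where the all-NaN tensor comes from.
x = torch.tensor([60000.0], dtype=torch.float16)
y = x * 2                        # overflows: tensor([inf], dtype=torch.float16)
print(torch.isinf(y).item())     # True
print(y - y)                     # tensor([nan], dtype=torch.float16)
print(torch.isnan(y - y).all())  # tensor(True) -- what trips the NaN guard

# float32 (the effect of launching with --no-half) keeps the same math finite.
print(x.float() * 2)             # tensor([120000.])
```

Launching with `--no-half` runs the model in float32 and avoids the overflow at the cost of extra VRAM; the "Upcast cross attention layer to float32" setting mentioned in the error message is a cheaper middle ground that upcasts only the attention layers.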

alexbespik · May 08 '23 10:05