
[Bug]: NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.

Open · PhotiniDev opened this issue 1 year ago • 44 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits

What happened?

Normally, A1111 features work fine with SDXL Base and SDXL Refiner, but on 3 occasions over the past 4-6 weeks I have hit this same bug. I've tried all the suggestions on the A1111 troubleshooting page with no success; the only way I have successfully fixed it is with a re-install from scratch. I run SDXL Base txt2img and it works fine; then I run SDXL Refiner img2img and receive the error, regardless of whether I use "send to img2img" or "Batch img2img".

Error Message: NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.

Steps to reproduce the problem

  1. Go to img2img
  2. Press Generate
  3. Receive the error message

What should have happened?

When working normally, it batch-refines and generates all the images from the input directory into the output directory.

Sysinfo

sysinfo-2023-08-31-18-35.txt

What browsers do you use to access the UI?

Google Chrome

Console logs

venv "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\venv\Scripts\Python.exe"
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.5.1
Commit hash: <none>
Launching Web UI with arguments:
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Loading weights [7440042bbd] from C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\models\Stable-diffusion\sd_xl_refiner_1.0.safetensors
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 14.7s (launcher: 3.4s, import torch: 4.7s, import gradio: 1.3s, setup paths: 1.0s, other imports: 1.2s, load scripts: 1.6s, create ui: 1.0s, gradio launch: 0.4s).
Creating model from config: C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\repositories\generative-models\configs\inference\sd_xl_refiner.yaml
Applying attention optimization: Doggettx... done.
Model loaded in 7.1s (load weights from disk: 1.8s, create model: 0.3s, apply weights to model: 1.4s, apply half(): 1.3s, move model to device: 1.8s, calculate empty prompt: 0.4s).
Will process 100 images, creating 1 new images for each.
  0%|                                                                                            | 0/6 [00:03<?, ?it/s]
*** Error completing request
*** Arguments: ('task(19bmqwbr6wil1q8)', 5, 'Photo of a scuba diving Hamster wearing a diving suit and googles surrounded by exotic fish and coral deep in the ocean', '', [], None, None, None, None, None, None, None, 20, 0, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.25, -1.0, -1.0, 0, 0, 0, False, 0, 1024, 1024, 1, 0, 0, 32, 0, 'C:\\Users\\Mono\\Desktop\\stable-diffusion-webui-master\\stable-diffusion-webui-master\\outputs\\txt2img-images\\2023-08-30', 'C:\\Users\\Mono\\Desktop\\stable-diffusion-webui-master\\stable-diffusion-webui-master\\outputs\\img2img-images', '', [], False, [], '', <gradio.routes.Request object at 0x0000021B4D7A2B30>, 0, True, False, False, False, 'base', '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0) {}    Traceback (most recent call last):
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\call_queue.py", line 58, in f
        res = list(func(*args, **kwargs))
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\img2img.py", line 226, in img2img
        process_batch(p, img2img_batch_input_dir, img2img_batch_output_dir, img2img_batch_inpaint_mask_dir, args, to_scale=selected_scale_tab == 1, scale_by=scale_by, use_png_info=img2img_batch_use_png_info, png_info_props=img2img_batch_png_info_props, png_info_dir=img2img_batch_png_info_dir)
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\img2img.py", line 114, in process_batch
        proc = process_images(p)
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\processing.py", line 677, in process_images
        res = process_images_inner(p)
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\processing.py", line 794, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\processing.py", line 1381, in sample
        samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\sd_samplers_kdiffusion.py", line 434, in sample_img2img
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\sd_samplers_kdiffusion.py", line 303, in launch_sampling
        return func()
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\sd_samplers_kdiffusion.py", line 434, in <lambda>
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\sd_samplers_kdiffusion.py", line 215, in forward
        devices.test_for_nans(x_out, "unet")
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\devices.py", line 155, in test_for_nans
        raise NansException(message)
    modules.devices.NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.

---
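For reference, the test_for_nans call at the bottom of the traceback above amounts to an all-NaN check on the sampler's output. A minimal sketch of the idea in Python (illustrative only, not the actual modules/devices.py source):

import torch

def test_for_nans(x: torch.Tensor, where: str) -> None:
    # If every element is NaN, something upstream (typically an fp16
    # overflow) has destroyed the tensor, so abort with a clear error.
    if torch.isnan(x).all():
        raise RuntimeError(f"A tensor with all NaNs was produced in {where}.")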

Additional information

The 1st time it happened was when Nvidia notified me of a driver update.

The last time it happened was after I had generated 100 images successfully using txt2img. It generated all 100 images, but the UI froze for 10 minutes before I manually closed the UI and the cmd window, and it hasn't worked since. I will have to re-install to get it working again.

I have just noticed my PC has switched to the Game Ready driver, but normally I use the Studio driver.

PhotiniDev avatar Aug 31 '23 18:08 PhotiniDev

Just tested with the Studio driver; it's still not working. I will reinstall to get it working.

PhotiniDev avatar Aug 31 '23 18:08 PhotiniDev

Same issue here occasionally; please let us know if a reinstall does it for you.

Ainaemaet avatar Aug 31 '23 19:08 Ainaemaet

Having the same issue as well since the new update to 1.6 :(

nekhtiari avatar Sep 01 '23 13:09 nekhtiari

Now that is strange... this is exactly what I just did! Yesterday I generated 1 image with txt2img, then upscaled it with img2img using the same settings. I then set 100 images to render with txt2img overnight; in the morning all the images were done, but the UI was not responding to clicks, so I had to close it and reopen it. Then I tried to upscale an image with the same settings as yesterday, and it doesn't work any more - the GUI is broken.

A1111 really needs to get things working with SDXL - I had no issues with ComfyUI (but I like the workflow better in A1111).

camaxide avatar Sep 01 '23 22:09 camaxide

Same issue here; I ran an identical setup in ComfyUI successfully. Any ideas? Best regards.

wejk-ewjslkj avatar Sep 03 '23 12:09 wejk-ewjslkj

Try this: set COMMANDLINE_ARGS=--api --no-half-vae --disable-nan-check --xformers --opt-split-attention --medvram

LockMan007 avatar Sep 08 '23 02:09 LockMan007

Try this: set COMMANDLINE_ARGS=--api --no-half-vae --disable-nan-check --xformers --opt-split-attention --medvram

Could you elaborate on what this actually does? It seems to me that disabling the NaN check isn't a good idea: if something is supposed to be there and it isn't, and we just ignore the check, the underlying issue isn't actually resolved.

curtwagner1984 avatar Sep 09 '23 14:09 curtwagner1984
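As background on why --no-half and the float32 upcast can matter: float16 can only represent values up to about 65504, so a large intermediate activation overflows to inf, and inf - inf then yields NaN, which is exactly what the NaN check trips on. A quick standalone PyTorch illustration (not webui code):

import torch

x = torch.tensor([70000.0])   # above float16's maximum of ~65504
h = x.half()                  # overflows to inf in float16
print(h)                      # tensor([inf], dtype=torch.float16)
print(h - h)                  # tensor([nan], dtype=torch.float16): inf - inf
print(x - x)                  # tensor([0.]): float32 has the headroom

--disable-nan-check, by contrast, only suppresses the error report; the all-NaN tensor still decodes to a black image, which matches what several commenters below observe.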

@LockMan007 Sorry, actually it's still not working on my side.

andyyeh75 avatar Sep 15 '23 02:09 andyyeh75

@LockMan007 adding only --disable-nan-check to webui-user.bat generates only black images. Adding the whole thing as you wrote it got me this:

Traceback (most recent call last):
  File "D:\stable-diffusion-webui\launch.py", line 48, in <module>
    main()
  File "D:\stable-diffusion-webui\launch.py", line 44, in main
    start()
  File "D:\stable-diffusion-webui\modules\launch_utils.py", line 436, in start
    webui.webui()
  File "D:\stable-diffusion-webui\webui.py", line 112, in webui
    create_api(app)
  File "D:\stable-diffusion-webui\webui.py", line 22, in create_api
    api = Api(app, queue_lock)
          ^^^^^^^^^^^^^^^^^^^^
  File "D:\stable-diffusion-webui\modules\api\api.py", line 211, in __init__
    api_middleware(self.app)
  File "D:\stable-diffusion-webui\modules\api\api.py", line 148, in api_middleware
    @app.middleware("http")
     ^^^^^^^^^^^^^^^^^^^^^^
  File "D:\stable-diffusion-webui\venv\Lib\site-packages\fastapi\applications.py", line 895, in decorator
    self.add_middleware(BaseHTTPMiddleware, dispatch=func)
  File "D:\stable-diffusion-webui\venv\Lib\site-packages\starlette\applications.py", line 139, in add_middleware
    raise RuntimeError("Cannot add middleware after an application has started")
RuntimeError: Cannot add middleware after an application has started

Edit: dropping the --api part seems to have fixed that crash on my end. Actually, it's --no-half-vae that solves the initial NaN bug.

Gouvernathor avatar Sep 18 '23 09:09 Gouvernathor

This is happening constantly. --no-half-vae doesn't fix it, and disabling the NaN check just produces black images when it fails. Switching between checkpoints can sometimes fix it temporarily, but it always returns.

Someone said they fixed this bug by using the launch argument --reinstall-xformers; I tried it, and hours later I have not re-encountered the bug.

Flerndip avatar Sep 27 '23 16:09 Flerndip

Try this: set COMMANDLINE_ARGS=--api --no-half-vae --disable-nan-check --xformers --opt-split-attention --medvram

Could you elaborate on what this actually does? It seems to me that disabling the NaN check isn't a good idea: if something is supposed to be there and it isn't, and we just ignore the check, the underlying issue isn't actually resolved.

I don't understand it in detail; I just know that this is what I do and it works. You can try adding parts or all of it and see if it works. I have it set to a custom port for various reasons.

This is what my customized copy of the .bat file I run looks like:

@echo off

set PYTHON="D:\AI\Python\Python310\python.exe"
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--api --no-half-vae --disable-nan-check --xformers --opt-split-attention --medvram --port 42000
set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.9,max_split_size_mb:512



call webui.bat

The path to PYTHON may not need to be set for you; it depends on where you have Python installed anyway.

LockMan007 avatar Sep 27 '23 21:09 LockMan007
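As an aside, the PYTORCH_CUDA_ALLOC_CONF line in that .bat tunes PyTorch's CUDA caching allocator (a garbage-collection threshold and a maximum block split size); it does not affect the NaN issue directly, but can reduce out-of-memory errors from fragmentation. The same setting can be applied from Python; a sketch, assuming it runs before anything initializes CUDA:

import os

# Must be set before the first CUDA allocation for the allocator to pick it up.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "garbage_collection_threshold:0.9,max_split_size_mb:512"

import torch  # imported only after the variable is set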

I have the same problem: as soon as I try img2img with SDXL, I get "NansException: A tensor with all NaNs was produced in Unet." The error is specific to SDXL; it's not present with 1.5 or other checkpoints. I tried changing every parameter, to no avail.

Sgrikkardo avatar Oct 04 '23 08:10 Sgrikkardo

This may help you.

(Settings -> Stable Diffusion -> Maximum number of checkpoints loaded at the same time) https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/13020#issuecomment-1704382917

shirayu avatar Oct 05 '23 09:10 shirayu

This may help you.

(Settings -> Stable Diffusion -> Maximum number of checkpoints loaded at the same time) #13020 (comment)

I tried it; it worked, but only once. I managed to obtain an img2img with SDXL, but the second time it was back to NaN, and I couldn't get another img2img no matter what.

Sgrikkardo avatar Oct 05 '23 12:10 Sgrikkardo

Open the stable-diffusion-webui root directory, locate webui.bat, and right-click to open it for editing.

Under the line set ERROR_REPORTING=FALSE, add the following line, then save and restart:

set COMMANDLINE_ARGS=--no-half --disable-nan-check

JoejoeC avatar Oct 10 '23 14:10 JoejoeC

  1. You need to update some things. I don't use xformers, but this is my "webui-user_update.bat":

@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--reinstall-torch --reinstall-xformers --xformers
git pull
call webui.bat

  2. I have an RTX 3090 Ti 24GB (with Resizable BAR activated on my ASUS motherboard) + 64GB RAM, and I couldn't solve this problem for a long time, but then I did. We need to load 2 checkpoints: base and refiner. So, as shirayu correctly pointed out where to look, go to "Settings -> Stable Diffusion -> Maximum number of checkpoints loaded at the same time" and set 2 instead of 1. Then restart the browser and terminal. Voila, everything works.

  3. Also, to speed up the process, I unchecked "Upcast cross attention layer to float32" in the same "Stable Diffusion" settings, and I set "Settings -> Optimization -> Cross attention optimization -> sdp-no-mem - scaled dot product without memory efficient attention". These two changes speed up the calculation process considerably!

  4. I always update extensions. After an update, always close the browser and terminal.

  5. This is my "webui-user.bat":

@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--xformers
call webui.bat

  6. I noticed that if I work after the PC comes out of sleep mode, VRAM is detected with bad sectors, and the 2nd generation in a row then gives an error. But if I restart the PC, no bad sectors are detected in VRAM and everything works as it should. Thank you, Windows :)

More info:

Using Windows 10, with Firefox and Vivaldi browsers (both working).

I've tested with "dreamshaperXL10_alpha2Xl10.safetensors" as the SD checkpoint, "sdxl-vae-fp16-fix.safetensors" as the SD VAE, and "sdXL_v10RefinerVAEFix.safetensors" as the Refiner.

Also, I have: version: v1.6.0 (AUTOMATIC1111) • python: 3.10.11 • torch: 2.0.1+cu118 • xformers: 0.0.20 • gradio: 3.41.2

"xformers" is just an option in "Cross attention optimization" that you can select if you want to test.

P.S. ComfyUI has no such problems, but you have to get used to its interface :)

riperbot avatar Oct 18 '23 16:10 riperbot
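For context, the "scaled dot product" entries in that Cross attention optimization menu refer to PyTorch 2.x's built-in fused attention kernel. A minimal standalone illustration of the underlying call (shapes chosen arbitrarily, not webui code):

import torch
import torch.nn.functional as F

# (batch, heads, sequence length, head dim), e.g. 77 prompt tokens
q = k = v = torch.randn(1, 8, 77, 64)
out = F.scaled_dot_product_attention(q, k, v)  # fused attention, torch >= 2.0
print(out.shape)  # torch.Size([1, 8, 77, 64])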

This may help you. (Settings -> Stable Diffusion -> Maximum number of checkpoints loaded at the same time) #13020 (comment)

I tried it; it worked, but only once. I managed to obtain an img2img with SDXL, but the second time it was back to NaN, and I couldn't get another img2img no matter what.

That does not work and is not the cause of the error; I have had it set to 2 for a long time and I still get the error.

So what is the solution for this bug now? None of the proposed solutions work.

raspitakesovertheworld avatar Nov 29 '23 08:11 raspitakesovertheworld

same issue here

joli-coeur50 avatar Dec 04 '23 16:12 joli-coeur50

Settings > Stable Diffusion > check "Upcast cross attention layer to float32"

joli-coeur50 avatar Dec 04 '23 16:12 joli-coeur50

No, that setting is already set and it still does not work; I'm getting the same error.


raspitakesovertheworld avatar Dec 05 '23 04:12 raspitakesovertheworld

Mac M1 Pro: I encountered the same problem, and running ./webui.sh --no-half fixed it for me! After researching the related info, I think it may be because the Mac GPU doesn't support the "half type" (fp16); this command-line argument disables it. I hope this info is useful to you!

Zq5437 avatar Dec 08 '23 03:12 Zq5437

I'm hitting the same issue too.

pickou avatar Dec 09 '23 13:12 pickou

I had the same problem.

t-xl avatar Dec 13 '23 06:12 t-xl

Same here

eduardonba1 avatar Dec 16 '23 01:12 eduardonba1

Just ran into this with img2img using any SDXL checkpoint in 1.7. Launching with --no-half fixes it on Linux here.

FWIW, "Upcast cross attention layer to float32" did not make a difference, and --disable-nan-check just generated black images.

nathanshipley avatar Dec 17 '23 15:12 nathanshipley

The problem is still present in 1.7. As previously pointed out, --no-half prevents the NaNs, but losing access to fp16 calculation is a problem that is still not addressed. For now I just generate a small random image in txt2img first, and then I can use img2img in half precision with no errors - but it's a workaround, not a solution.

Sgrikkardo avatar Dec 18 '23 11:12 Sgrikkardo
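That warm-up workaround can also be scripted against the web API (available when the UI is launched with --api); a rough sketch, with the throwaway txt2img parameters chosen arbitrarily:

import requests

base = "http://127.0.0.1:7860"

# Warm-up: a tiny, discarded txt2img generation before the real job.
requests.post(f"{base}/sdapi/v1/txt2img",
              json={"prompt": "", "steps": 1, "width": 64, "height": 64})

# ...then submit the actual img2img request via /sdapi/v1/img2img as usual.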

Sometimes changing models works, but there is no permanent solution to this.

Gokhalesh avatar Dec 19 '23 20:12 Gokhalesh

This is ongoing with the latest install script... Running Gentoo, none of the mentioned fixes work, in seemingly any combination.

tmheath avatar Dec 20 '23 03:12 tmheath

I have tried the suggestions in the comments, but the error still occurs. Is there any way to solve it? Thank you very much!

Issue: modules.devices.NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.

dadadaing10 avatar Dec 25 '23 02:12 dadadaing10

I can confirm this bug too: for SDXL models, doing any (even empty) txt2img generation before img2img fixes it!

AngelTs avatar Dec 26 '23 06:12 AngelTs