
[Bug]: NansException: A tensor with all NaNs was produced in Unet. Use --disable-nan-check commandline argument to disable this check.

Open lightfuryturtle opened this issue 1 year ago • 33 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits

What happened?

After doing a clean install of Stable Diffusion I've been getting this error nonstop, even after changing and redownloading models, including the official SD 1.5 model.

Steps to reproduce the problem

  1. Go to ....
  2. Press ....
  3. ...

What should have happened?

Up until last week I was able to generate several images at a time.

Commit where the problem happens

https://github.com/AUTOMATIC1111/stable-diffusion-webui

What platforms do you use to access the UI ?

Windows

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

set COMMANDLINE_ARGS= --opt-split-attention --precision full --no-half --lowvram --xformers --autolaunch

List of extensions

None

Console logs

venv "D:\AI\StableDiffusion\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.7 (tags/v3.10.7:6cc6b13, Sep  5 2022, 14:08:36) [MSC v.1933 64 bit (AMD64)]
Commit hash: 22bcc7be428c94e9408f589966c2040187245d81
Installing requirements for Web UI
Launching Web UI with arguments: --opt-split-attention --precision full --no-half --lowvram --xformers --autolaunch
Loading weights [f0c9cfc1ab] from D:\AI\StableDiffusion\stable-diffusion-webui\models\Stable-diffusion\anything-v4.0-pruned-fp16.safetensors
Creating model from config: D:\AI\StableDiffusion\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying xformers cross attention optimization.
Textual inversion embeddings loaded(0):
Model loaded in 110.8s (load weights from disk: 5.7s, create model: 2.1s, apply weights to model: 102.9s).
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 178.3s (import torch: 23.2s, import gradio: 13.6s, import ldm: 6.3s, other imports: 16.9s, setup codeformer: 0.5s, list builtin upscalers: 0.1s, load scripts: 3.1s, load SD checkpoint: 111.0s, create ui: 0.7s, gradio launch: 2.8s).
  0%|                                                                                           | 0/20 [00:25<?, ?it/s]
Error completing request
Arguments: ('task(hdsnwqtb200kz0t)', 'gamer', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0) {}
Traceback (most recent call last):
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\processing.py", line 503, in process_images
    res = process_images_inner(p)
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\processing.py", line 653, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\processing.py", line 869, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 358, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 234, in launch_sampling
    return func()
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 358, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "D:\AI\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "D:\AI\StableDiffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "D:\AI\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 152, in forward
    devices.test_for_nans(x_out, "unet")
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\devices.py", line 152, in test_for_nans
    raise NansException(message)
modules.devices.NansException: A tensor with all NaNs was produced in Unet. Use --disable-nan-check commandline argument to disable this check.

Closing server running on port: 7860
Restarting UI...
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 2.7s (load scripts: 0.7s, reload script modules: 0.2s, create ui: 1.5s, gradio launch: 0.2s).
  0%|                                                                                           | 0/20 [00:06<?, ?it/s]
Error completing request
Arguments: ('task(0lfg64zi27r1fp2)', 'GG bro', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, False, 'MultiDiffusion', False, 10, 1, 1, 64, False, True, 1024, 1024, 96, 96, 48, 1, 'None', 2, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, False, True, True, False, 960, 64, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0) {}
Traceback (most recent call last):
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\processing.py", line 503, in process_images
    res = process_images_inner(p)
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\processing.py", line 653, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\processing.py", line 869, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 358, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 234, in launch_sampling
    return func()
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 358, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "D:\AI\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "D:\AI\StableDiffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "D:\AI\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 152, in forward
    devices.test_for_nans(x_out, "unet")
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\devices.py", line 152, in test_for_nans
    raise NansException(message)
modules.devices.NansException: A tensor with all NaNs was produced in Unet. Use --disable-nan-check commandline argument to disable this check.

Additional information

No response

lightfuryturtle avatar Apr 02 '23 11:04 lightfuryturtle

This isn't an issue with automatic1111, just the model/CFG/VRAM combination on your own system. Do you get this error consistently across all models, resolutions, and CFG settings?

razvan-nicolae avatar Apr 02 '23 12:04 razvan-nicolae

This isn't an issue with automatic1111, just the model/CFG/VRAM combination on your own system. Do you get this error consistently across all models, resolutions, and CFG settings?

I have the same question. And when I add --disable-nan-check to the command line, I just generate black images, no matter which sampling method I choose. I have read the troubleshooting page in the wiki but there is no answer there. Does anyone know how to solve it?

JesuisTong avatar Apr 02 '23 13:04 JesuisTong
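The black images reported above follow from the same failure: with the check disabled, the all-NaN latents still flow through to image conversion, where the NaNs end up as zeros and the output is pure black. A rough numpy illustration of one such mechanism (not webui's actual conversion code, just a sketch of why NaNs become black pixels):

```python
import numpy as np

# Pretend the VAE decoded an all-NaN latent into an "image" tensor.
decoded = np.full((512, 512, 3), np.nan, dtype=np.float32)

# A typical float->uint8 conversion clamps to [0, 1]; NaNs survive clipping,
# so they must be zeroed before the integer cast -- giving an all-black image.
pixels = (np.nan_to_num(np.clip(decoded, 0.0, 1.0)) * 255).astype(np.uint8)
assert (pixels == 0).all()  # every pixel is black
```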

I've tested multiple different safetensors files now; all give the same error. I used to be able to generate upwards of 25 images in a batch, so VRAM is not the issue.

lightfuryturtle avatar Apr 02 '23 13:04 lightfuryturtle

This isn't an issue with automatic1111, just the model/CFG/VRAM combination on your own system. Do you get this error consistently across all models, resolutions, and CFG settings?

I disagree. I have about 20 models, all of which worked flawlessly until an update in the middle of last week; at that point, half of them started giving this exact error. The only thing that changed was the automatic git pull I do when I start automatic1111. Nothing else changed, so I have to disagree with your comment.

Edit: it was the update directly after the one from last week that broke the model select bar.

zethfoxster avatar Apr 02 '23 13:04 zethfoxster

Then the best fix is to check out a commit from a week or two ago, easy fix. I had to do the same; the current commit has big UI problems.

It's possible the issues are linked and this is just more fallout from the new update?


razvan-nicolae avatar Apr 02 '23 14:04 razvan-nicolae

That seems like the only option right now; I'll test that out and see if it fixes the issues.

lightfuryturtle avatar Apr 02 '23 14:04 lightfuryturtle

Alright, I did a git checkout to a9fed7c, but I'm still getting the errors. At this point I have no clue what is going wrong; even the SD 1.4 pruned model gives the same error.

lightfuryturtle avatar Apr 02 '23 16:04 lightfuryturtle

Smells like reinstall time.


razvan-nicolae avatar Apr 02 '23 19:04 razvan-nicolae

I'm skeptical that the issue is connected to the auto-update feature. I turned off 'git pull' last week because of a faulty version, yet Python packages continue to update upon startup. I suspect one of these packages might be the cause.

FatGuy84 avatar Apr 03 '23 21:04 FatGuy84

Same. I tested NMKD's GUI and have no issues whatsoever, even without any command-line settings for VRAM use.

lightfuryturtle avatar Apr 04 '23 03:04 lightfuryturtle

I tried an automatic installer from Civitai, and so far only the base SD EMA-pruned model is working; all other models have this issue.


lightfuryturtle avatar Apr 04 '23 05:04 lightfuryturtle

I had to check out a0d07fb5 to get it working on all my models again. It kind of sucks, because as fast as all this is moving, being stuck on a version from three weeks ago feels ancient.

zethfoxster avatar Apr 04 '23 23:04 zethfoxster

a0d08fb5 isn't working for me, unfortunately.

jtran-developer avatar Apr 05 '23 03:04 jtran-developer

a0d08fb5 isn't working for me, unfortunately.

me neither

BetterBeDoing avatar Apr 07 '23 10:04 BetterBeDoing

Might be a dumb question, but are we sure the right eyes are on this? It's been a while and I haven't heard of any progress on it, or even an explanation.

jtran-developer avatar Apr 07 '23 14:04 jtran-developer

I think I found what the issue was: there were a bunch of pip and Python files left over in my AppData local directory. After I deleted them and reinstalled, there have been no issues so far.

lightfuryturtle avatar Apr 07 '23 17:04 lightfuryturtle

Well, as a Google Colab user, that solution won't work for me. There are no local files outside of the SD folder.

jtran-developer avatar Apr 07 '23 18:04 jtran-developer

On my side, I changed to torch==1.13.1 and torchvision==0.14.1; the errors disappeared and it works well.

xinbing avatar Apr 08 '23 03:04 xinbing

At least on my end I've managed to narrow this down to xformers. I haven't tested too much, but I know it's broken at least since xformers 0.0.18. My last known working commit was 658ebab, and I have not tested more recent ones, since they break and there have been little to no improvements since that commit. It looks like a fix is already planned: https://github.com/facebookresearch/xformers/issues/719

If you don't want to wait, from your install directory:

source venv/bin/activate
pip install ninja
pip uninstall xformers
pip install -v -U git+https://github.com/facebookresearch/xformers.git@658ebab#egg=xformers

Otherwise, unless you know you need xformers due to VRAM constraints for what you want to produce, another good option is to edit your webui-user.sh or webui-user.bat and add --opt-sdp-attention to the command-line args.

Miyuutsu avatar Apr 11 '23 02:04 Miyuutsu

It's to do with xformers 0.0.18.

To fix it, you can run pip install xformers==0.0.17.

No need for rebuilding.

Xynonners avatar Apr 12 '23 09:04 Xynonners
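A small helper for deciding whether an installed xformers version string falls in the range reported broken above (0.0.18). This is a sketch with deliberately simplified version parsing; the function name is made up for illustration and is not part of webui or xformers.

```python
def xformers_version_affected(version: str) -> bool:
    """True if this xformers version matches the 0.0.18 breakage reported above.

    Handles plain versions as well as rc/dev suffixes such as "0.0.16rc425"
    or "0.0.17.dev435" by keeping only the leading digits of each component.
    Illustrative helper only.
    """
    nums = []
    for part in version.split(".")[:3]:
        digits = ""
        for ch in part:
            if ch.isdigit():
                digits += ch
            else:
                break
        nums.append(int(digits or 0))
    return tuple(nums) == (0, 0, 18)

assert xformers_version_affected("0.0.18")
assert not xformers_version_affected("0.0.17")
assert not xformers_version_affected("0.0.16rc425")
```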

It's to do with xformers 0.0.18.

To fix it, you can run pip install xformers==0.0.17.

No need for rebuilding.

This worked, but just running this wants to reinstall ALL of xformers' dependencies, including torch 2.0, which I can't run on my servers. What worked for me was to follow the directions from the Dreambooth install prompt on the command line.

Download the xformers 0.0.17.dev wheel from Facebook Research. I went here and matched my torch and CUDA versions: https://github.com/facebookresearch/xformers/actions/runs/4056598357. Unzip the xformers wheel artifact and copy it to wherever you need to install it:

source venv/bin/activate
pip install xformers-0.0.17.dev435-cp310-cp310-manylinux2014_x86_64.whl

Your version will most likely be different. This installs ONLY the xformers that is needed; Dreambooth likes this version as well. Just had my customers test this and we're rocking and rolling! Thanks @Xynonners and @Miyuutsu! Couldn't have done it without ya!

rundiffusion avatar Apr 14 '23 17:04 rundiffusion

It's to do with xformers 0.0.18.

To fix it, you can run pip install xformers==0.0.17.

No need for rebuilding.

You did a great job!

Coloured-glaze avatar Apr 18 '23 09:04 Coloured-glaze

I am not using xformers but I also get the same problem.

zxm9988 avatar Apr 19 '23 01:04 zxm9988

I removed all the models and reinstalled them.

zxm9988 avatar Apr 23 '23 01:04 zxm9988

I followed the error's suggestion and added the command-line argument --disable-nan-check to webui-user.sh; it works.

fongfiafia avatar Apr 30 '23 06:04 fongfiafia

  0%|                                                                                           | 0/15 [00:03<?, ?it/s]
Error completing request
Arguments: ('task(c0zo83vyut8gim0)', 'highly detailed landscape, masterpiece', '__negative__', [], 15, 0, False, False, 1, 1, 4, -1.0, -1.0, 0, 0, 0, False, 680, 640, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, 0, '<span>(No stats yet, run benchmark in VRAM Estimator tab)</span>', True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, False, 'positive', 'comma', 0, False, False, '', '', 1, '', 0, '', 0, '', True, False, False, False, 0) {}
Traceback (most recent call last):
  File "C:\ai\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "C:\ai\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "C:\ai\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "C:\ai\stable-diffusion-webui\modules\processing.py", line 503, in process_images
    res = process_images_inner(p)
  File "C:\ai\stable-diffusion-webui\modules\processing.py", line 653, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "C:\ai\stable-diffusion-webui\modules\processing.py", line 869, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "C:\ai\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 358, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "C:\ai\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 234, in launch_sampling
    return func()
  File "C:\ai\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 358, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "C:\ai\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\ai\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\ai\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\ai\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 152, in forward
    devices.test_for_nans(x_out, "unet")
  File "C:\ai\stable-diffusion-webui\modules\devices.py", line 152, in test_for_nans
    raise NansException(message)
modules.devices.NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.

100%|██████████████████████████████████████████████████████████████████████████████████| 15/15 [01:01<00:00,  4.10s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 15/15 [01:00<00:00,  4.06s/it]

Got this too, but only sporadically. I generated many images with the exact same settings, but one time it just threw this error; I restarted the generation and it was fine.

My xformers version is 0.0.16rc425, automatic1111 hash 22bcc7be428c94e9408f589966c2040187245d81.

janwilmans avatar May 01 '23 07:05 janwilmans


Note: I have modest hardware (a GTX 1660 Super with 6 GB) and was generating at 640x512, which might be near the limit of what is possible on this hardware.

janwilmans avatar May 01 '23 08:05 janwilmans

I followed the error's suggestion and added the command-line argument --disable-nan-check to webui-user.sh; it works.

Two updates. The bad news is that if you add the command-line argument I mentioned above, you will always get a black picture as the result.

The good news is that I finally found the reason: I should put the LoRA model into the models/Lora/ folder of the Stable Diffusion directory and select a Stable Diffusion checkpoint, instead of loading the LoRA model directly as the checkpoint. The prompt should then look like `<lora:lora model name:1>`.


fongfiafia avatar May 01 '23 09:05 fongfiafia

Interesting. I was also using the revAnimated model with a LoRA, and I was already using it the way you suggested, but I'm still getting the error.

janwilmans avatar May 01 '23 09:05 janwilmans