
Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

Open Giro06 opened this issue 2 years ago • 245 comments

When I try to run webui-user.bat, this error is shown.

venv "C:\Users\giray\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: 67d011b02eddc20202b654dfea56528de3d5edf7
Traceback (most recent call last):
  File "C:\Users\giray\stable-diffusion-webui\launch.py", line 110, in <module>
    run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'")
  File "C:\Users\giray\stable-diffusion-webui\launch.py", line 60, in run_python
    return run(f'"{python}" -c "{code}"', desc, errdesc)
  File "C:\Users\giray\stable-diffusion-webui\launch.py", line 54, in run
    raise RuntimeError(message)
RuntimeError: Error running command.
Command: "C:\Users\giray\stable-diffusion-webui\venv\Scripts\python.exe" -c "import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'"
Error code: 1
stdout:
stderr: Traceback (most recent call last):
  File "<string>", line 1, in <module>
AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

Giro06 avatar Oct 05 '22 18:10 Giro06

In launch.py, line 15, change the line to commandline_args = os.environ.get('COMMANDLINE_ARGS', "--skip-torch-cuda-test"), thus adding --skip-torch-cuda-test to COMMANDLINE_ARGS as stated in the error message.
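As a sanity check, the behavior of that one-line edit can be sketched in plain Python (the helper name here is just for illustration): os.environ.get returns its second argument only when the variable is not set at all, so an exported COMMANDLINE_ARGS still wins over the default.

```python
import os

# Sketch of the launch.py edit above: the second argument to os.environ.get
# is a fallback used only when COMMANDLINE_ARGS is absent from the environment.
def get_commandline_args(environ=os.environ):
    return environ.get('COMMANDLINE_ARGS', "--skip-torch-cuda-test")

print(get_commandline_args({}))                                 # → --skip-torch-cuda-test
print(get_commandline_args({'COMMANDLINE_ARGS': '--medvram'}))  # → --medvram
```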

I also had to add --precision full --no-half. HOWEVER, I am unable to run this on an AMD 5700 XT GPU and it defaults to using the CPU only. It seems like a lot of others have this same issue.

DudeShift avatar Oct 05 '22 18:10 DudeShift

For some reason setting the command line arguments in launch.py did not work for me. However setting them in the webui-user.sh script did the trick.

lechu1985 avatar Oct 05 '22 20:10 lechu1985

or in the file "webui-user.bat", change the line to set COMMANDLINE_ARGS=--lowvram --precision full --no-half --skip-torch-cuda-test (no spaces around the =, otherwise cmd creates a variable whose name ends in a space)

In this other project, if there is no NVIDIA GPU, the work is done on the CPU without the need to specify any startup parameters. It would be nice to see how they do it: https://github.com/cmdr2/stable-diffusion-ui/
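The idea that project uses can be sketched roughly like this (a hypothetical helper, not their actual code); in a real app the flag would come from torch.cuda.is_available():

```python
# Hedged sketch of automatic device selection, not cmdr2/stable-diffusion-ui's
# actual code: fall back to the CPU when no CUDA GPU is available, instead of
# failing at startup and requiring a flag.
def pick_device(cuda_available: bool) -> str:
    # In a real app, cuda_available would be torch.cuda.is_available()
    return "cuda" if cuda_available else "cpu"

print(pick_device(True))   # → cuda
print(pick_device(False))  # → cpu
```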

maikelsz avatar Oct 05 '22 21:10 maikelsz

For some reason setting the command line arguments in launch.py did not work for me. However setting them in the webui-user.sh script did the trick.

@lechu1985 How did you do that? In webui-user.sh there was no such variable.

If I add it in launch.py, I get this error:

launch.py: error: unrecognized arguments: --skip-torch-cuda-test
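For what it's worth, that message is the generic argparse complaint you get when a flag reaches a parser that never defined it; a minimal illustration (the --medvram flag here is just an example, not the real webui parser):

```python
import argparse

# Minimal illustration of the "unrecognized arguments" error above:
# argparse exits when asked to parse a flag it was never taught.
parser = argparse.ArgumentParser()
parser.add_argument("--medvram", action="store_true")  # a flag the parser knows

def is_recognized(flag: str) -> bool:
    try:
        parser.parse_args([flag])
        return True
    except SystemExit:  # argparse prints an error and exits on unknown flags
        return False

print(is_recognized("--medvram"))               # → True
print(is_recognized("--skip-torch-cuda-test"))  # → False
```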

zippy-zebu avatar Oct 06 '22 17:10 zippy-zebu

in webui-user.sh line 8:

# Commandline arguments for webui.py, for example: export COMMANDLINE_ARGS="--medvram --opt-split-attention"
export COMMANDLINE_ARGS="--skip-torch-cuda-test"

maniac-0s avatar Oct 06 '22 17:10 maniac-0s

in webui-user.sh line 8:

# Commandline arguments for webui.py, for example: export COMMANDLINE_ARGS="--medvram --opt-split-attention"
export COMMANDLINE_ARGS="--skip-torch-cuda-test"

or "webui-user.bat", if you are on Windows, like this: set COMMANDLINE_ARGS=--lowvram --precision full --no-half --skip-torch-cuda-test

maikelsz avatar Oct 06 '22 18:10 maikelsz

or in the file "webui-user.bat", change the line to set COMMANDLINE_ARGS=--lowvram --precision full --no-half --skip-torch-cuda-test (no spaces around the =, otherwise cmd creates a variable whose name ends in a space)

In this other project, if there is no nvidia GPU, the operation is done in the CPU, without the need to specify any initial parameters. It would be nice to see how they do it: https://github.com/cmdr2/stable-diffusion-ui/

thank you

atomboy1653 avatar Oct 09 '22 11:10 atomboy1653

But this still doesn't solve an ensuing issue: that by adding --precision full --no-half, your SD will use the CPU instead of the GPU, which reduces your performance drastically, which defeats the entire purpose.

So the root issue that needs to be addressed is - why is pytorch not detecting the GPU in the first place?

tpiatan avatar Oct 09 '22 13:10 tpiatan

I had the same problem. I tried to solve it by googling; maybe my graphics card is too old (GTX 950M, roughly equivalent to a GTX 750) and uses CUDA 10.2. I guess the Torch version doesn't match my CUDA version?
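The mismatch being guessed at here is essentially a version comparison: a PyTorch build targets one CUDA version and needs the system to support at least that. A toy sketch (the version strings are examples, not read from any real machine):

```python
# Hedged illustration of a CUDA version mismatch: a PyTorch wheel built for
# CUDA 11.3 cannot run on a system that only supports CUDA 10.2.
def cuda_build_is_usable(wheel_cuda: str, system_cuda: str) -> bool:
    def parse(version: str):
        return tuple(int(part) for part in version.split("."))
    return parse(system_cuda) >= parse(wheel_cuda)

print(cuda_build_is_usable("11.3", "10.2"))  # → False
print(cuda_build_is_usable("11.3", "11.7"))  # → True
```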

Lan-megumi avatar Oct 10 '22 15:10 Lan-megumi

Same problem here: Ryzen 7 5800X with an RX 6800. Just trying to install; is there a fix yet, or is my PC not compatible? Thanks for the help.

TRIBVTES avatar Oct 11 '22 00:10 TRIBVTES

I had the same problem. I tried to solve it by googling; maybe my graphics card is too old (GTX 950M, roughly equivalent to a GTX 750) and uses CUDA 10.2. I guess the Torch version doesn't match my CUDA version?

Is it only going to work with NVIDIA cards, not Radeon?

TRIBVTES avatar Oct 11 '22 00:10 TRIBVTES

Same problem here, but my setup is a 12700KF with a GTX 1080 Ti, which should be compatible with the default Torch version and CUDA 11.8, right? Or maybe the Torch version is not compatible with Windows 11 and CUDA 11?

y1052895290 avatar Oct 11 '22 05:10 y1052895290

Same problem here, I have a Ryzen 5.

Rymegu avatar Oct 11 '22 08:10 Rymegu

I had the same problem. I tried to solve it by googling; maybe my graphics card is too old (GTX 950M, roughly equivalent to a GTX 750) and uses CUDA 10.2. I guess the Torch version doesn't match my CUDA version?

Is it only going to work with NVIDIA cards, not Radeon?

Maybe...

There is another user with a Radeon RX 5700 for whom it does not work:

https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/2191

Lan-megumi avatar Oct 11 '22 11:10 Lan-megumi

Thanks! COMMANDLINE_ARGS=--skip-torch-cuda-test was very helpful.

HA-JD avatar Oct 11 '22 15:10 HA-JD

~~same issue, last working commit with xformers confirmed working was https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/9d33baba587637815d818e5e641d8f8b74c4900d~~

~~To those in this thread, confirm by trying git checkout 9d33baba587637815d818e5e641d8f8b74c4900d then rerun webui-user.bat~~

~~Don't use the full precision or low-vram stuff unless you don't want to use your GPU or have reduced memory. 2080 Super 8GB / Windows 10~~

Update: Try deleting the venv folder and running the webui-user.bat again. That seemed to get it working again for me.
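That fix can be expressed as a small sketch (reset_venv is a hypothetical helper, not part of the repo): deleting the venv folder forces the next webui-user.bat run to rebuild it from scratch, reinstalling torch.

```python
import pathlib
import shutil

# Sketch of the "delete the venv folder" fix above. repo_root is wherever you
# cloned stable-diffusion-webui; the next webui-user.bat run recreates venv.
def reset_venv(repo_root: str) -> bool:
    venv = pathlib.Path(repo_root) / "venv"
    if venv.exists():
        shutil.rmtree(venv)
    return not venv.exists()  # True once the folder is gone
```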

aphix avatar Oct 11 '22 16:10 aphix

CUDA is NVIDIA-proprietary software and only works with NVIDIA GPUs. So that's the answer for everyone with an AMD card wondering why their GPU isn't recognized.

rothej avatar Oct 14 '22 02:10 rothej

But I thought it would work on Windows even with this ROCm PyTorch? Guess I'll have to switch to Linux.

pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.1.1

ajalberd avatar Oct 14 '22 04:10 ajalberd

Guess I'll have to switch to Linux.

Can confirm that on Linux, ROCm PyTorch works with AMD GPUs. I dual-booted into EndeavourOS (Arch) and followed the Stable Diffusion Native Isekai Too guide using the arch4edu ROCm PyTorch package.

Getting 2.95–3 it/s on an RX 5700 XT.

DudeShift avatar Oct 14 '22 16:10 DudeShift

or in the file "webui-user.bat", change the line to set COMMANDLINE_ARGS=--lowvram --precision full --no-half --skip-torch-cuda-test (no spaces around the =, otherwise cmd creates a variable whose name ends in a space)

In this other project, if there is no nvidia GPU, the operation is done in the CPU, without the need to specify any initial parameters. It would be nice to see how they do it: https://github.com/cmdr2/stable-diffusion-ui/

This totally worked! Thanks!

fesolla avatar Oct 18 '22 16:10 fesolla

If I comment it out, will everything be okay?

#if not skip_torch_cuda_test:
#    run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'")

I mean, my notebook will not burn?

MetaphysicsNecrosis avatar Oct 18 '22 17:10 MetaphysicsNecrosis

Shouldn't this be added to the https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs wiki page at least?

skerit avatar Oct 19 '22 18:10 skerit

Could someone explain how to fix this error to me in layman's terms?

omni002 avatar Oct 23 '22 22:10 omni002

@omni002 CUDA is NVIDIA-proprietary software for parallel processing of machine learning/deep learning models that is meant to run on NVIDIA GPUs, and it is a dependency for Stable Diffusion running on GPUs. If you have an AMD GPU, when you start up the webui it will test for CUDA and fail, preventing you from running Stable Diffusion. The workaround of adding --skip-torch-cuda-test skips that startup test, so Stable Diffusion will still run. Because you still can't run CUDA on your AMD GPU, it will default to using the CPU for processing, which takes much longer than parallel processing on a GPU would.

It looks like some people have been able to get their AMD cards to run Stable Diffusion by using ROCm PyTorch on Linux, but it doesn't appear to work on Windows from what people are commenting here. I have no idea how to set that up and I am sure it is a pain in the ass, so maybe they can chime in on the specifics. @DudeShift

rothej avatar Oct 23 '22 23:10 rothej

Thanks, but I meant: could someone explain in simple, step-by-step layman's terms how I add the line?

omni002 avatar Oct 23 '22 23:10 omni002

@omni002 Edit webui-user.bat: where it says COMMANDLINE_ARGS=, change it to COMMANDLINE_ARGS=--lowvram --precision full --no-half --skip-torch-cuda-test

Edit: The above assumes Windows. If you are on Linux, add the line to webui-user.sh instead and use quotes; you may also need to delete the /venv folder based on others' comments: COMMANDLINE_ARGS="--lowvram --precision full --no-half --skip-torch-cuda-test"

rothej avatar Oct 23 '22 23:10 rothej

Thanks that seems to have fixed it.

omni002 avatar Oct 24 '22 00:10 omni002

I cannot help with the Radeon folks, but this happens to me when my computer wakes up from sleep/being suspended. I found my issue on this pytorch forum: https://discuss.pytorch.org/t/cuda-fails-to-reinitialize-after-system-suspend/158108

TL;DR

sudo rmmod nvidia_uvm
sudo modprobe nvidia_uvm

This has worked for me with my RTX 3090, CUDA 11.7, and NVIDIA drivers 515.65.01

As others have said, if you use --skip-torch-cuda-test then you'll be running SD on your CPU, which defeats the purpose of having the card in the first place.

EDIT I recognize now that the original poster is on a Windows machine and I proposed a Linux based solution. I hope it helps others who come here but I should've noticed that sooner!

pypeaday avatar Oct 31 '22 14:10 pypeaday

I'm having the same error, but I am using NVIDIA?

JonJoeYT avatar Nov 10 '22 21:11 JonJoeYT

I've managed to use the solution above, but I would much prefer to use the GPU if there's a possible solution for me. I am using an NVIDIA GeForce GTX.

JonJoeYT avatar Nov 10 '22 21:11 JonJoeYT