
[Bug]: Torch is not able to use GPU

Open NarniaEXE opened this issue 2 years ago • 17 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits

What happened?

I tried to install Stable Diffusion.

Steps to reproduce the problem

Sadly, I don't know how to fix this.

What should have happened?

It should have started, and I would have gotten access to the program.

Commit where the problem happens

Tried to start it

What platforms do you use to access UI ?

Windows

What browsers do you use to access the UI ?

No response

Command Line Arguments

D:\Stabel-Diffusion\stable-diffusion-webui>git pull
Already up to date.
venv "D:\Stabel-Diffusion\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: 0b8911d883118daa54f7735c5b753b5575d9f943
Traceback (most recent call last):
  File "D:\Stabel-Diffusion\stable-diffusion-webui\launch.py", line 307, in <module>
    prepare_environment()
  File "D:\Stabel-Diffusion\stable-diffusion-webui\launch.py", line 221, in prepare_environment
    run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'")
  File "D:\Stabel-Diffusion\stable-diffusion-webui\launch.py", line 88, in run_python
    return run(f'"{python}" -c "{code}"', desc, errdesc)
  File "D:\Stabel-Diffusion\stable-diffusion-webui\launch.py", line 64, in run
    raise RuntimeError(message)
RuntimeError: Error running command.
Command: "D:\Stabel-Diffusion\stable-diffusion-webui\venv\Scripts\python.exe" -c "import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'"
Error code: 1
stdout: <empty>
stderr: Traceback (most recent call last):
  File "<string>", line 1, in <module>
AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

Additional information, context and logs

I tried adding some things to webui-user.bat, like: "set COMMANDLINE_ARGS = --lowvram --precision full --no-half --skip-torch-cuda-test"

Didn't work
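For what it's worth, the quoted line itself may well be the culprit: cmd's set includes any spaces around = in the variable name and value, so "set COMMANDLINE_ARGS = ..." defines a variable literally named "COMMANDLINE_ARGS " (trailing space) and %COMMANDLINE_ARGS% stays empty, meaning launch.py never sees the flags. A sketch of webui-user.bat with the assignment written without spaces (roughly the stock layout of that file, reusing the flags quoted above):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
rem No spaces around "=": "set VAR = value" would define a variable named
rem "VAR " (with a trailing space), so %COMMANDLINE_ARGS% would stay empty.
set COMMANDLINE_ARGS=--lowvram --precision full --no-half --skip-torch-cuda-test

call webui.bat
```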

NarniaEXE avatar Jan 12 '23 03:01 NarniaEXE

What is your GPU?

mezotaken avatar Jan 12 '23 11:01 mezotaken

> What is your GPU?

8GB ASROCK RX580 PHANTOM GAMING OC

NarniaEXE avatar Jan 12 '23 13:01 NarniaEXE

~~AMD + Windows is barely supported at the moment. Consult the following: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs Note that only the link in the first sentence is for Windows.~~ OK, correction: that is for installing the original repo without this whole UI. Should be fixed ASAP. It looks like AMD + Windows is not supported, but you can run it on the CPU: add --skip-torch-cuda-test --precision full --no-half to COMMANDLINE_ARGS in webui-user.bat.

mezotaken avatar Jan 12 '23 13:01 mezotaken

> ~~AMD + Windows is barely supported at the moment. Consult the following: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs Note that only the link in the first sentence is for Windows.~~ OK, correction: that is for installing the original repo without this whole UI. Should be fixed ASAP. It looks like AMD + Windows is not supported, but you can run it on the CPU: add --skip-torch-cuda-test --precision full --no-half to COMMANDLINE_ARGS in webui-user.bat.

I already tried --skip-torch-cuda-test --precision full --no-half; it didn't work out.

NarniaEXE avatar Jan 12 '23 14:01 NarniaEXE

Then --skip-torch-cuda-test is not being passed somehow. run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'") — this line can only be executed if there is no such parameter in argv. Can you give me a screenshot of your webui-user.bat file and, after running it, the full logs from the moment cmd opens to the end? Post the logs on pastebin.com or somewhere. Actually, you can post webui-user.bat on pastebin as well.
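For context, a simplified sketch of the relevant launch.py logic (not the verbatim source): the flag is stripped out of sys.argv before the check, and the CUDA assertion only runs when it was absent.

```python
import sys

def extract_arg(args, name):
    # Remove `name` from the argument list and report whether it was present.
    return [x for x in args if x != name], name in args

sys.argv, skip_torch_cuda_test = extract_arg(sys.argv, '--skip-torch-cuda-test')

if not skip_torch_cuda_test:
    # Only reached when the flag was NOT passed -- which is why seeing this
    # assertion fire means COMMANDLINE_ARGS never made it through.
    import torch
    assert torch.cuda.is_available(), ('Torch is not able to use GPU; add '
        '--skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check')
```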

mezotaken avatar Jan 12 '23 14:01 mezotaken

> Then --skip-torch-cuda-test is not being passed somehow. run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'") — this line can only be executed if there is no such parameter in argv. Can you give me a screenshot of your webui-user.bat file and, after running it, the full logs from the moment cmd opens to the end? Post the logs on pastebin.com or somewhere. Actually, you can post webui-user.bat on pastebin as well.

[screenshot of webui-user.bat]

https://pastebin.com/wkZybtHi

NarniaEXE avatar Jan 12 '23 16:01 NarniaEXE

Seems fine to me all along; I was just a bit confused earlier by the automatic git pull, but now it makes sense. As a sanity check, try printing it out after this line: https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/0b8911d883118daa54f7735c5b753b5575d9f943/launch.py#L178 with print(commandline_args). It should contain the argument. If it's empty, then the error lies somewhere in webui.bat; if it's not empty, then I do not understand how you can get the error message. https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/0b8911d883118daa54f7735c5b753b5575d9f943/launch.py#L205 https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/0b8911d883118daa54f7735c5b753b5575d9f943/launch.py#L220-L221
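That sanity check, spelled out (assuming the variable is read from the environment with os.environ.get, as around the linked line in that commit; repr() also exposes stray quotes or whitespace):

```python
import os

commandline_args = os.environ.get('COMMANDLINE_ARGS', "")
print(repr(commandline_args))  # expect e.g. '--skip-torch-cuda-test ...'; '' means webui.bat dropped it
```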

mezotaken avatar Jan 12 '23 16:01 mezotaken

I have this same exact issue, and what you've said is great and all, except that a CPU can't generate images the way a GPU can.

Moses2917 avatar Feb 02 '23 09:02 Moses2917

If someone is getting a core dump on Arch Linux with the RX 580 or a similar GPU: the solution for me was to remove the "sentencepiece" folder from "/home/bionagato/.local/lib/python3.10/site-packages/", which was created after Automatic1111 installed open clip.
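In shell terms, that removal would be something like the following (the path is the commenter's own per-user site-packages directory; adjust the username for your system, and note this only applies to the Arch core-dump case described above):

```bash
# Remove the per-user sentencepiece folder identified above as the culprit.
rm -rf ~/.local/lib/python3.10/site-packages/sentencepiece
```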

Bionagato avatar Feb 04 '23 03:02 Bionagato

I have the same issue too.

Gi1K avatar Feb 07 '23 01:02 Gi1K

I have the same issue with my 4090 and the latest drivers. All this started after installing Composable LoRA and Latent Couple.

niatro avatar Mar 02 '23 21:03 niatro

Same problem here with a 3070 Ti after installing Composable LoRA and Latent Couple

rukaiko avatar Mar 04 '23 17:03 rukaiko

It is really annoying. Any ideas?

niatro avatar Mar 04 '23 17:03 niatro

Deleting the venv folder and reinstalling it solved the problem for now

rukaiko avatar Mar 05 '23 11:03 rukaiko

> If someone is getting a core dump on Arch Linux with the RX 580 or a similar GPU: the solution for me was to remove the "sentencepiece" folder from "/home/bionagato/.local/lib/python3.10/site-packages/", which was created after Automatic1111 installed open clip.

Could you elaborate on this? I have an RX 580 8GB and I can't figure out why it won't work. My best guess is that this GPU model isn't supported by the Arch dependencies, as mentioned in the limitations section here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs

outget avatar Mar 09 '23 01:03 outget

Same problem with a GTX 1660 Super 6GB on Linux Mint 21.1. Deleting the venv folder and reinstalling did not help.

Traceback (most recent call last):
  File "/media/SSD/stable-diffusion-webui/launch.py", line 380, in <module>
    prepare_environment()
  File "/media/SSD/stable-diffusion-webui/launch.py", line 287, in prepare_environment
    run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'")
  File "/media/SSD/stable-diffusion-webui/launch.py", line 137, in run_python
    return run(f'"{python}" -c "{code}"', desc, errdesc)
  File "/media/SSD/stable-diffusion-webui/launch.py", line 113, in run
    raise RuntimeError(message)
RuntimeError: Error running command.
Command: "/media/SSD/stable-diffusion-webui/venv/bin/python3" -c "import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'"
Error code: 134
stdout: <empty>
stderr: "hipErrorNoBinaryForGpu: Unable to find code object for all current devices!"
Aborted (core dumped)

With --skip-torch-cuda-test I get another error: after "Launching Web UI with arguments:" it prints "hipErrorNoBinaryForGpu: Unable to find code object for all current devices!"

sakralbar avatar Mar 13 '23 08:03 sakralbar

I was able to resolve this by doing export COMMANDLINE_ARGS="--skip-torch-cuda-test" before running webui.sh
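Spelled out as a shell session (assuming you launch from the repo root with the stock webui.sh):

```bash
# Export the flag so webui.sh/launch.py pick it up, then start the UI.
export COMMANDLINE_ARGS="--skip-torch-cuda-test"
./webui.sh
```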

krish567 avatar Jul 03 '23 08:07 krish567