
Torch is not able to use GPU

afrofail opened this issue 2 years ago • 16 comments

Followed all the simple steps, but I can't seem to get past "Installing torch". It installs for a few minutes, then when I try to run webui-user.bat I receive "Torch is not able to use GPU".

First time I open webui-user.bat

Creating venv in directory venv using python "C:\Users(User)\AppData\Local\Programs\Python\Python310\python.exe"
venv "C:\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash:
Installing torch
Traceback (most recent call last):
  File "C:\stable-diffusion-webui\launch.py", line 96, in <module>
    run(f'"{python}" -m {torch_command}', "Installing torch", "Couldn't install torch")
  File "C:\stable-diffusion-webui\launch.py", line 44, in run
    raise RuntimeError(message)
RuntimeError: Couldn't install torch.
Command: "C:\stable-diffusion-webui\venv\Scripts\python.exe" -m pip install torch==1.12.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
Error code: 1
stdout: Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu113
Collecting torch==1.12.1+cu113
  Using cached https://download.pytorch.org/whl/cu113/torch-1.12.1%2Bcu113-cp310-cp310-win_amd64.whl (2143.8 MB)
Collecting typing-extensions
  Using cached typing_extensions-4.3.0-py3-none-any.whl (25 kB)
Installing collected packages: typing-extensions, torch
Successfully installed torch-1.12.1+cu113 typing-extensions-4.3.0

stderr: [notice] A new release of pip available: 22.2.1 -> 22.2.2
[notice] To update, run: C:\stable-diffusion-webui\venv\Scripts\python.exe -m pip install --upgrade pip

Press any key to continue . . .

2nd time I open webui-user.bat

venv "C:\stable-diffusion-webui\venv\Scripts\Python.exe" Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] Commit hash: Traceback (most recent call last): File "C:\stable-diffusion-webui\launch.py", line 98, in run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU'") File "C:\stable-diffusion-webui\launch.py", line 50, in run_python return run(f'"{python}" -c "{code}"', desc, errdesc) File "C:\stable-diffusion-webui\launch.py", line 44, in run raise RuntimeError(message) RuntimeError: Error running command. Command: "C:\stable-diffusion-webui\venv\Scripts\python.exe" -c "import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU'" Error code: 1 stdout: stderr:

Press any key to continue . . .

Has anyone found a fix for this? I've tried reinstalling CUDA and PyTorch and edited the launch.py parameters based on some suggestions, without any luck.

afrofail avatar Sep 21 '22 07:09 afrofail

I believe AUTOMATIC1111 fixed the issue minutes after you posted: https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/45c46f4cb3d6924882bd944712be168c7c2f605d Did the issue fix itself - after you did a git pull?

kiancn avatar Sep 21 '22 07:09 kiancn

I believe AUTOMATIC1111 fixed the issue minutes after you posted: 45c46f4 Did the issue fix itself - after you did a git pull?

I did run the git pull. Unfortunately I have the same errors.

afrofail avatar Sep 21 '22 08:09 afrofail

Creating venv in directory venv using python "C:\Users\User\AppData\Local\Programs\Python\Python310\python.exe"
venv "C:\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash:
Installing torch
Traceback (most recent call last):
  File "C:\stable-diffusion-webui\launch.py", line 106, in <module>
    run(f'"{python}" -m {torch_command}', "Installing torch", "Couldn't install torch")
  File "C:\stable-diffusion-webui\launch.py", line 54, in run
    raise RuntimeError(message)
RuntimeError: Couldn't install torch.
Command: "C:\stable-diffusion-webui\venv\Scripts\python.exe" -m pip install torch==1.12.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
Error code: 1
stdout: Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu113
Collecting torch==1.12.1+cu113
  Using cached https://download.pytorch.org/whl/cu113/torch-1.12.1%2Bcu113-cp310-cp310-win_amd64.whl (2143.8 MB)
Collecting typing-extensions
  Using cached typing_extensions-4.3.0-py3-none-any.whl (25 kB)
Installing collected packages: typing-extensions, torch
Successfully installed torch-1.12.1+cu113 typing-extensions-4.3.0

stderr: [notice] A new release of pip available: 22.2.1 -> 22.2.2
[notice] To update, run: C:\stable-diffusion-webui\venv\Scripts\python.exe -m pip install --upgrade pip

Press any key to continue . . .

venv "C:\stable-diffusion-webui\venv\Scripts\Python.exe" Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] Commit hash: Traceback (most recent call last): File "C:\stable-diffusion-webui\launch.py", line 109, in run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDINE_ARGS variable to disable this check'") File "C:\stable-diffusion-webui\launch.py", line 60, in run_python return run(f'"{python}" -c "{code}"', desc, errdesc) File "C:\stable-diffusion-webui\launch.py", line 54, in run raise RuntimeError(message) RuntimeError: Error running command. Command: "C:\stable-diffusion-webui\venv\Scripts\python.exe" -c "import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDINE_ARGS variable to disable this check'" Error code: 1 stdout: stderr:

Press any key to continue . . .

afrofail avatar Sep 21 '22 09:09 afrofail

Make sure that line 109 in "launch.py" reads exactly as follows: run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDINE_ARGS variable to disable this check'")

echinopsis42 avatar Sep 21 '22 09:09 echinopsis42

Make sure that line 109 in "launch.py" reads exactly as follows: run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDINE_ARGS variable to disable this check'")

Triple checked. I just posted the updated error; the only change is the line numbers in the error.

afrofail avatar Sep 21 '22 09:09 afrofail

go to "launch.py" and where it says "COMMANDLINE_ARGS" add --skip-torch-cuda-test it should look like this commandline_args = os.environ.get('COMMANDLINE_ARGS', "--skip-torch-cuda-test")

Zaidbaidadekalb avatar Sep 21 '22 22:09 Zaidbaidadekalb

Yeah, the above works. I think it should be added to the wiki. It took me a while to figure out that it needed to be added to launch.py.

Also, COMMANDINE_ARGS on line 111 in launch.py is missing an "L". It's just a typo, but I thought I should mention it.

weddi-eddy avatar Sep 26 '22 15:09 weddi-eddy

I am sorry to admit that I am not very good with the cmd prompt; I never learned how to use it. I am having the same problem, and it never gets past the runtime error.

venv "C:\Users\goods\stable-diffusion-webui\venv\Scripts\Python.exe" Python 3.10.7 (tags/v3.10.7:6cc6b13, Sep 5 2022, 14:08:36) [MSC v.1933 64 bit (AMD64)] Commit hash: f2a4a2c3a672e22f088a7455d6039557370dd3f2 Traceback (most recent call last): File "C:\Users\goods\stable-diffusion-webui\launch.py", line 111, in run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'") File "C:\Users\goods\stable-diffusion-webui\launch.py", line 61, in run_python return run(f'"{python}" -c "{code}"', desc, errdesc) File "C:\Users\goods\stable-diffusion-webui\launch.py", line 55, in run raise RuntimeError(message) RuntimeError: Error running command. Command: "C:\Users\goods\stable-diffusion-webui\venv\Scripts\python.exe" -c "import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'" Error code: 1 stdout: stderr: C:\Users\goods\stable-diffusion-webui\venv\lib\site-packages\torch\cuda_init_.py:83: UserWarning: CUDA initialization: CUDA driver initialization failed, you might not have a CUDA gpu. (Triggered internally at ..\c10\cuda\CUDAFunctions.cpp:109.) return torch._C._cuda_getDeviceCount() > 0 Traceback (most recent call last): File "", line 1, in AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check I am so ignorant in the use of cmd prompt I don't know how to enter the fix you suggest. It keeps telling me to hit any key and then the window closes. Can anyone help me?

z46rt avatar Sep 28 '22 05:09 z46rt

Edit your C:\Users\goods\stable-diffusion-webui\launch.py file in Notepad.

Where you see commandline_args = os.environ.get('COMMANDLINE_ARGS', ""), make it look like commandline_args = os.environ.get('COMMANDLINE_ARGS', "--skip-torch-cuda-test"). Save and try again. This only skips the check.
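
If you want to check whether torch can actually see the GPU at all, rather than just skipping the check, a small diagnostic along these lines can help. This is only a sketch; run it with the venv's own python.exe and adjust the path to your install:

# save as check_cuda.py and run it with the venv's interpreter, e.g.
#   C:\stable-diffusion-webui\venv\Scripts\python.exe check_cuda.py
import torch

print("torch version:", torch.__version__)           # a CUDA build ends in something like +cu113
print("built against CUDA:", torch.version.cuda)     # None means a CPU-only wheel got installed
print("cuda available:", torch.cuda.is_available())  # False is exactly what trips the webui check
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))

If the second print shows None, the CUDA-enabled wheel never made it into the venv; if it shows a version but the third print is False, the installed NVIDIA driver most likely does not support that CUDA version (or there is no CUDA-capable GPU).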

weddi-eddy avatar Sep 28 '22 08:09 weddi-eddy

Thanks so much for the help!! I now have Stable Diffusion running at http://127.0.0.1:7860/. When I run a prompt it does not create anything; it gives me a runtime error: RuntimeError: "LayerNormKernelImpl" not implemented for 'Half' (Time taken: 0.01s). Have I installed it wrong? What does this error mean? How do I fix it?

z46rt avatar Sep 28 '22 21:09 z46rt
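
For readers who hit the same "LayerNormKernelImpl" not implemented for 'Half' error later: it usually means torch ended up running on the CPU, where half-precision ops are not implemented. The workaround most commonly suggested for CPU-only runs is to force full precision as well; in the same launch.py default it would look something like the line below (an illustration of the commonly cited flags, not something confirmed in this thread):

commandline_args = os.environ.get('COMMANDLINE_ARGS', "--skip-torch-cuda-test --precision full --no-half")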

Hi. I added "--skip-torch-cuda-test" but it still doesn't work. Please help.

quesSimons avatar Nov 14 '22 19:11 quesSimons

Broken

I just encountered this, and it bears mentioning what I had to do to fix this. FWIW, I am using git-bash on Windows. I had a working installation running and generating images and then I upgraded CUDA to version 11.8... Then everything stopped working with the above errors.

I am using virtualenv, but you may do a similar thing with conda.

CUDA version

The version of pytorch is directly related to your installed CUDA version. If you change CUDA, you need to reinstall pytorch. The default version appears to be 11.3. I got it working with 11.6 by modifying the line in launch.py and running it manually.

First things first, install a compatible version of CUDA. 11.8 does not appear to be compatible, but 11.3 and 11.6 definitely are, according to the pytorch website.
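
A quick way to compare what the driver supports with what the installed wheel was built for, before picking a version, is something like the sketch below. It assumes nvidia-smi is on your PATH and that you run it with the venv's Python:

# print the CUDA version torch was built against, next to what the driver reports
import subprocess

import torch

print("torch built for CUDA:", torch.version.cuda)  # e.g. 11.3 or 11.6
# nvidia-smi's header shows "CUDA Version: X.Y", the highest CUDA the driver supports;
# the torch wheel's CUDA version should not be newer than that.
subprocess.run(["nvidia-smi"], check=False)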

Restart your machine

It's terrible, but trust me. Just restart after installing the compatible CUDA.

Remove your old python packages (optional)

I think that this can be skipped, but I did this on my machine. Worst case is that it takes a few minutes to reinstall and some wear cycles on your SSD.

rm -rf venv/Lib/site-packages/*

Reinstall python dependencies

source venv/Scripts/activate && python -m ensurepip --upgrade && python -m pip install --upgrade pip && python -m pip install -r requirements.txt

This should get you almost ready to go.

The final fix

You need to install pytorch again. It is manually installed in launch.py, but if you change CUDA, you gotta rebuild pytorch. I think that this repo depends on version 1.12.1, but when I ran the install in the last step, it installed version 1.14. Once I downgraded, it seemed to work again. I used version 11.6, but if you are using 11.3, change cu116 to cu113:

pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
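
After that install finishes, a quick sanity check run inside the activated venv should confirm the downgrade took and that the GPU is visible again; the expected versions below are just the ones from the command above:

# run with the venv's python after reinstalling torch and torchvision
import torch
import torchvision

print(torch.__version__, torchvision.__version__)  # expect 1.12.1+cu116 and 0.13.1+cu116 (or +cu113)
print(torch.cuda.is_available())                   # should now print True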

Start webui

python launch.py $YOUR_START_ARGS

Then it started working again. Hope this helps!

penguincoder avatar Dec 06 '22 14:12 penguincoder

Related: as of this writing, Python 3.11.x is not supported, so you need 3.10.x. Then delete the entire venv folder; relaunching webui-user.bat will reinstall all the needed things. See #4345.
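
A quick way to confirm which interpreter the venv will be rebuilt with is a check like this (a sketch; run it with the same python that webui-user.bat points at):

# the webui of this era needs a 3.10.x interpreter; torch wheels for 3.11 were not available yet
import sys

print(sys.version)
assert sys.version_info[:2] == (3, 10), "use Python 3.10.x for stable-diffusion-webui"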

ghostsquad avatar Dec 25 '22 06:12 ghostsquad

I had the same issue and updating Windows solved it.

markcng avatar Apr 02 '23 20:04 markcng

thanks for letting me know

z46rt avatar Apr 02 '23 23:04 z46rt

I have this same issue but I'm running on Ubuntu. I have limited familiarity with Python. The instructions given by penguin above don't appear to be applicable to me.

jbnv avatar Apr 15 '23 00:04 jbnv

I found a fix here, hope it helps you guys: https://www.reddit.com/r/StableDiffusion/comments/z6nkh0/torch_is_not_able_to_use_gpu/

CreativeBytes avatar Apr 26 '23 21:04 CreativeBytes

I believe AUTOMATIC1111 fixed the issue minutes after you posted: 45c46f4 Did the issue fix itself - after you did a git pull? kiancn

You clearly don't know how to read, since it's 2024 and the issue is still affecting people. Also, OP was running AMD hardware, which doesn't have CUDA; you completely missed that because you're an arrogant programmer. For someone who got into the Arctic Code Vault submission, I'd figure you'd have more common sense. Please, unless you are the lead developer of a project or involved in some way, do not comment again on other people's issues, to avoid future misinformation being spread from your fingertips.

For those who do find this and have AMD, please try using Stable Diffusion with DirectML (it is installed the same way):

https://github.com/lshqqytiger/stable-diffusion-webui-directml

For everyone else using NVIDIA (or the AMD DirectML fork), I would try the solution mentioned above:

-- From a CMD prompt, change your working directory to the venv/Scripts folder of your Stable Diffusion install --

• pip install fastapi==0.90.1
• python.exe -m pip install --upgrade pip

I also learned that in the newer releases DirectML is a fallback option, not the default; you may have to open the batch file and add '--skip-torch-cuda-test' to 'COMMANDLINE_ARGS'. I'm not an expert on this, nor will I claim to be; I'm just here offering what I found as I was having the same issues, and it helped resolve mine.

Good luck to anyone trying this out, and much appreciation to those who helped me actually figure the issue out, unlike kiancn, who decided it would be better to troll this issue ticket instead of giving any constructive feedback. I would also like to thank AUTOMATIC1111 and others for the work they did making these great scripts!

(While I understand this issue was closed, people in the future WILL google and find this post; keep that in mind, because that's why old threads get revived occasionally. I'm not just necroing, I'm adding legitimate information I found. I don't like posting links to sources, because eventually sources get taken down and the information is lost.)

Denveous avatar Jan 06 '24 14:01 Denveous