
[Bug]: "AssertionError: Torch is not able to use GPU;"

Open thelolz385 opened this issue 2 years ago • 6 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits

What happened?

The webui had been working fine on my PC previously. I have an Asus Zephyrus G14 (AMD Ryzen 9 5900HS, 16 GB RAM, RTX 3060 Mobile 6 GB, plus integrated AMD Radeon Graphics). I was having some issues and couldn't find the cause, so I did a clean install many times, but every time I get this error: "AssertionError: Torch is not able to use GPU;" (screenshot attached)

Steps to reproduce the problem

Nothing special; just install the webui and this error appears from the start.

What should have happened?

I don't know; whenever I installed it previously it worked just fine. This time it's not detecting the GPU at all.

Commit where the problem happens

98947d173e3f1667eba29c904f681047dea9de90

What platforms do you use to access UI ?

Windows

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

No response

Additional information, context and logs

I have tried switching between browsers. I also created a new user account and did a clean install there, but I get this error every time.

thelolz385 avatar Nov 14 '22 04:11 thelolz385

You can install the program on AMD GPUs according to this description (I don't have an AMD card, I switched to nVIDIA card because of the AI)

mykeehu avatar Nov 14 '22 14:11 mykeehu

You can install the program on AMD GPUs according to this description (I don't have an AMD card, I switched to nVIDIA card because of the AI)

But I do have an nVIDIA GPU:

AMD Ryzen 9 5900HS, 16 GB RAM, RTX 3060 Mobile (6 GB), and also AMD Radeon Graphics

thelolz385 avatar Nov 14 '22 22:11 thelolz385

There was a problem with recent NVIDIA driver releases that was causing this error for some people, myself included. I fixed it by upgrading my drivers to the latest 522.30 Studio version. Someone else had suggested rolling back to 516.30. YMMV.

jwax33 avatar Nov 15 '22 00:11 jwax33

I'm following the guide on AMD GPUs and it's giving the same error as the OP. Where is the TORCH_COMMAND='pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.1.1' python launch.py --precision full --no-half command supposed to go? I tried exporting it in the webui-user script, but I still have to use --skip-torch-cuda-test and the "Found no NVIDIA driver on your system" exception still appears.
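For reference, environment variables like this usually go in the webui-user script so launch.py picks them up. A sketch, assuming a Linux/ROCm setup (the file name and flags here follow the guide quoted above; adjust to your install):

```shell
# Hypothetical webui-user.sh fragment for a Linux/ROCm install.
# TORCH_COMMAND overrides the pip command launch.py uses to install torch.
export TORCH_COMMAND='pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.1.1'
# launch.py reads COMMANDLINE_ARGS for the webui flags.
export COMMANDLINE_ARGS='--precision full --no-half'
```

The command itself (python launch.py) is then run by webui.sh; the TORCH_COMMAND part is only the pip install string, not something you run together with launch.py on one line.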

vncastanheira avatar Nov 15 '22 20:11 vncastanheira

I just looked a little at the code here. There is a modules/devices.py script that seems to be responsible for checking the GPU/hardware and running accordingly. It doesn't appear to have any logic for AMD (at least not for ONNX). I also briefly looked at txt2img.py and a few others, and I don't see any configuration to use ONNX at all. I use Linux a lot, but I haven't used it at home in a while, to separate work from home; my home PC is mostly for gaming and runs Windows. So maybe the current code works on Linux, since I think Linux doesn't use ONNX with AMD?

Getting it working on Windows IS possible, but you have to convert any models to ONNX first. If you are using a ckpt file, you first have to convert it to a "diffusers"-style model, and then to ONNX.
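The ckpt → diffusers → ONNX conversion described above can be sketched with the converter scripts shipped in the huggingface/diffusers repository (script names and arguments are from the diffusers source tree at the time and may have moved since; paths here are illustrative):

```shell
# Sketch: convert an original .ckpt to diffusers format, then to ONNX.
# Both scripts live under scripts/ in the huggingface/diffusers repo.
python scripts/convert_original_stable_diffusion_to_diffusers.py \
    --checkpoint_path model.ckpt --dump_path ./model-diffusers
python scripts/convert_stable_diffusion_checkpoint_to_onnx.py \
    --model_path ./model-diffusers --output_path ./model-onnx
```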

Then there is the issue of intermixing torch and numpy between the two. And the SD pipeline module/class that you load for the pipe is different as well.

So this can be done, but someone is going to have to add logic for the following: Is the system an AMD Windows system? Does it have the modified/custom-compiled Microsoft DirectML ONNX runtime installed?

If yes, set a flag that we are using ONNX. Check the models folder for ONNX models and load them. If any diffusers- or "original"-format models are found, update them, then convert them to ONNX.

When executing a pipeline, make sure to load an ONNX-compatible pipeline: use an existing one, fork it, or modify the existing one to take an ONNX boolean flag or something.

Now, I have only been learning about all this today, but I think you will also have to convert the VAE and make sure the available schedulers are compatible/working.

There are also probably some things I missed, but as you can see, it's quite a lot of work, and only a small subset of users have AMD GPUs.
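The detection step described above (Windows + DirectML ONNX runtime → set an ONNX flag, otherwise stay on torch) can be sketched like this. This is a minimal illustration, not code from the repo; the function names are hypothetical, and the only real API assumed is onnxruntime.get_available_providers():

```python
import sys

def has_directml_onnx_runtime():
    """Probe for a DirectML-capable onnxruntime build.

    Assumption: the onnxruntime-directml package reports
    'DmlExecutionProvider' among its available providers.
    """
    try:
        import onnxruntime
        return "DmlExecutionProvider" in onnxruntime.get_available_providers()
    except ImportError:
        return False

def select_backend():
    # On an AMD Windows box with the DirectML ONNX runtime installed,
    # flag the ONNX path; everywhere else, fall back to the torch path.
    if sys.platform == "win32" and has_directml_onnx_runtime():
        return "onnx"
    return "torch"

print(select_backend())
```

A real implementation in modules/devices.py would then use this flag to decide which pipeline class to load and which models to convert, as outlined above.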

digitalirony avatar Nov 17 '22 04:11 digitalirony

I was able to get the install and everything to work with:

AMD Game Ready Driver 527.56, CUDA 12, CUDA compilation tools release 11.5, V11.5.119

by adding the following to my bash profile (or using the export command):

export TORCH_COMMAND="pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu117"

before I ran:

bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh)

risingsunomi avatar Dec 13 '22 07:12 risingsunomi