
AMD GPUs

Open · flying-sheep opened this issue 2 years ago · 25 comments

Describe the bug

I have an AMD Radeon RX 6800 XT. Stable Diffusion supports this GPU.

After building this image, it fails to run:

 => => naming to docker.io/library/webui-docker-automatic1111  0.0s
[+] Running 1/1
 ⠿ Container webui-docker-automatic1111-1  Created  0.2s
Attaching to webui-docker-automatic1111-1
Error response from daemon: could not select device driver "nvidia" with capabilities: [[gpu]]

Steps to Reproduce

  1. Run docker compose --profile auto up --build (after download)

Hardware / Software:

  • OS: Arch Linux (up-to-date)
  • GPU: AMD Radeon RX 6800 XT
  • Version 1.0.1

flying-sheep avatar Sep 15 '22 08:09 flying-sheep

@flying-sheep Unfortunately, AMD GPUs are not currently supported. I know that the auto fork can run on AMD GPUs (https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs), but I don't have one to test with.

If you would like to contribute, that would be great!

AbdBarho avatar Sep 15 '22 08:09 AbdBarho

This docker-compose file seems to support passing AMD GPUs to docker: https://github.com/compscidr/lolminer-docker/blob/main/docker-compose.yml
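The AMD-relevant part of that file boils down to handing the kernel device nodes to the container, roughly like this (a sketch based on that file; service and image names are placeholders):

services:
  webui:                              # placeholder service name
    image: webui-docker-automatic1111 # placeholder image
    devices:
      - /dev/kfd                      # ROCm compute interface
      - /dev/dri                      # GPU render nodes
    group_add:
      - video                         # device nodes are usually owned by the video group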

But I don’t know what’s necessary software-wise. Making just the device change, I get:

webui-docker-automatic1111-1  | txt2img: 
webui-docker-automatic1111-1  | /opt/conda/lib/python3.8/site-packages/torch/autocast_mode.py:162: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling

flying-sheep avatar Sep 15 '22 08:09 flying-sheep

Ah, it seems PyTorch needs to be installed via pip to get ROCm support. But it’s unclear to me whether that means it somehow detects the GPU while building, because if the built PyTorch package could run on both CUDA and ROCm, there’d be no reason not to distribute it via Anaconda, right?
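As far as I can tell, the backend is baked in when the wheel is built, and separate wheels are published per backend. A quick way to see which build you have (nothing repo-specific, just stock torch attributes):

# Exactly one of these is non-None, depending on which wheel is installed:
# torch.version.cuda on CUDA builds, torch.version.hip on ROCm builds.
python -c "import torch; print(torch.version.cuda, torch.version.hip)"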

flying-sheep avatar Sep 15 '22 08:09 flying-sheep

You are asking difficult questions my friend.

AbdBarho avatar Sep 15 '22 08:09 AbdBarho

Welp, apparently nvidia has pressed enough people into their monopoly that I’m the first one :anguished:

flying-sheep avatar Sep 15 '22 09:09 flying-sheep

Have a look at : https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs#running-inside-docker

You need to pass the GPU through into the docker container for ROCm to use it.
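Concretely, that means handing the ROCm device nodes to the container, along these lines (flags as described in the ROCm docs; the image name is a placeholder):

# /dev/kfd is the ROCm compute interface, /dev/dri holds the GPU render
# nodes; the video group typically owns both device files.
docker run -it \
  --device=/dev/kfd \
  --device=/dev/dri \
  --group-add video \
  rocm/pytorch:latest   # placeholder image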

JoeMojoJones avatar Oct 01 '22 08:10 JoeMojoJones

@JoeMojoJones thank you, this link is helpful for reference.

The problem is I have no AMD GPU so I can't even test if the code works.

AbdBarho avatar Oct 01 '22 08:10 AbdBarho

@AbdBarho I have Pytorch installed via pip on my machine, what do I need to modify in the docker file to get AMD working? Maybe if it works I can do a PR for this?

GBora avatar Nov 04 '22 22:11 GBora

@GBora that's great! Unfortunately, I have no experience working with AMD GPUs and Docker for deep learning. Maybe the link above can help guide you.

I would guess the changes would probably be related to the base image and the deploy config in docker compose, but this is just a guess.
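For context, the NVIDIA-specific bit that fails on AMD-only hosts is presumably the deploy reservation in the compose file, which requests the nvidia driver. A sketch of what that block looks like (inferred from the error message above, not copied from this repo):

# The "could not select device driver nvidia" error comes from a
# reservation like this; an AMD setup would drop it in favor of devices:.
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: all
          capabilities: [gpu]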

AbdBarho avatar Nov 05 '22 17:11 AbdBarho

The problem is I have no AMD GPU so I can't even test if the code works.

Please make the changes to the docker-compose file and let me know; I'll pull them, try to run it, and tell you if everything is correct :) At the moment, invoke doesn't return the issue from this discussion. I have an RX 6600 and will try to run it.

NazarYermolenko avatar Feb 03 '23 18:02 NazarYermolenko

I got it working pretty easily for AMD

https://github.com/AbdBarho/stable-diffusion-webui-docker/pull/362/files

mtthw-meyer avatar Mar 10 '23 19:03 mtthw-meyer

Awesome, your branch works nicely indeed!

Finally a way to use the GPU's potential lol.

flying-sheep avatar Mar 11 '23 14:03 flying-sheep

Hello, I have this error although I have a Tesla T4 and Ubuntu 22.04. Can somebody help me please? I thought using Docker might make my life easier :'c

svupper avatar Mar 30 '23 16:03 svupper

Ok :) I just needed to execute this:

curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | \
  sudo apt-key add -
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
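After restarting docker, the runtime can be sanity-checked with something like this (the image tag is just an example):

# Should print the nvidia-smi table from inside the container
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi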

svupper avatar Mar 30 '23 17:03 svupper

@flying-sheep Was it merged to master?

f1am3d avatar Aug 28 '23 10:08 f1am3d

No, doesn’t look like it: #362

I just checked it out locally and ran it.

flying-sheep avatar Aug 28 '23 14:08 flying-sheep

@mtthw-meyer Does your fork still work? I'm trying to get it running, but it complains "Found no NVIDIA driver on your system". This is usually bypassed by passing "--skip-torch-cuda-test" to launch.py, but I don't see where launch.py gets used.

Nevermind, I got it working. I had to update some things in the dockerfile for torch, install some additional packages, and edit the requirements file to get auto working. Still trying to sort out invokeai.
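For anyone hitting the same wall: auto's launch.py picks up extra flags from the COMMANDLINE_ARGS environment variable, so the CUDA check can be skipped without finding where launch.py is invoked (a sketch; whether this repo's image forwards the variable is an assumption):

# AUTOMATIC1111 reads extra launch flags from COMMANDLINE_ARGS
export COMMANDLINE_ARGS="--skip-torch-cuda-test"
python launch.py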

tgm4883 avatar Sep 16 '23 10:09 tgm4883

@tgm4883 could you please open a PR or share your modifications to fix the container?

Coniface avatar Sep 18 '23 08:09 Coniface

@Coniface

I'll try to share that when I get home tonight. It's some fixes on the AMD fork, and I know so little about SD that it might have other issues, but it runs and works with the plugins I use.

tgm4883 avatar Sep 18 '23 19:09 tgm4883

I'm attaching the git diff I made. I also have a build script that builds and tags the image. I've only gotten the automatic1111 interface to work. Let me know if you have any questions.

#!/usr/bin/env bash
# Tag every build with a timestamp; the compose file is assumed to use
# BUILD_DATE in the image tag (sd:auto-amd-$BUILD_DATE).
TIMESTAMP=$(date +%Y%m%d.%H%M%S)
export BUILD_DATE=$TIMESTAMP
# Clean up the previous test container and "latest" image, ignoring errors.
docker rm -f test-sd-auto-1 &>/dev/null || :
docker image rm -f sd:auto-amd-latest &>/dev/null || :
docker compose build auto-amd
# Point the "latest" tag at the image we just built.
docker tag sd:auto-amd-$BUILD_DATE sd:auto-amd-latest

Updated the file I uploaded to clean it up a little bit: 20230918.txt

tgm4883 avatar Sep 19 '23 03:09 tgm4883

As of writing, I found that the sd-webui documentation is out of date for AMD GPUs on Linux (I'm currently using Fedora 39 and want to run it on an AMD 6900 XT): https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs

It also skips a lot of details on the prerequisites for setting up rocm/hip-related dependencies. I think the easiest way after all is to use the rocm/pytorch docker image; even the ROCm documentation suggests it as one of the first setup options. One sticking point is that a lot of factors affect whether PyTorch gets installed correctly to detect and use your AMD GPU. I'm currently working on a Docker image that deploys stable-diffusion-webui on AMD GPU systems with one click.

I'd be interested in seeing whether others are working on something similar or have thoughts on this!
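For anyone who wants to try that route before a polished image exists, the bare-bones version is something like this (tag and mount path are illustrative):

# Interactive ROCm PyTorch container with the GPU passed through;
# the webui checkout is mounted in from the host.
docker run -it --device=/dev/kfd --device=/dev/dri --group-add video \
  -v "$HOME/stable-diffusion-webui:/workspace" \
  rocm/pytorch:latest bash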

justin13888 avatar Nov 28 '23 22:11 justin13888

Even though I also think the AMD docs are miserably out of date and I just can't understand why, you don't need to install any special rocm/hip system dependencies. The only thing needed is the ROCm build of the PyTorch python package:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.6

(see PyTorch – Get started locally)

cloudishBenne avatar Dec 30 '23 07:12 cloudishBenne

Any news on this? I'm looking for a way to run the webui on a 680M.

tristan-k avatar Apr 05 '24 18:04 tristan-k

As an update, I was able to run AUTOMATIC on Fedora 39 using rocm5.7.1 provided through the distro repo and this version of torch and torchvision:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.7

justin13888 avatar Apr 05 '24 21:04 justin13888

Any news on this? I'm looking for a way to run the webui on a 680M.

I have a laptop with the same chip as well but never tried it. You have to make sure your architecture is supported by checking the compatibility matrix (e.g. https://rocm.docs.amd.com/en/docs-5.7.1/release/gpu_os_support.html).

I also found somebody commenting about this in the ROCm repo: https://github.com/ROCm/ROCm/discussions/2932#discussioncomment-8615032
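The workaround usually discussed there for officially unsupported RDNA2 APUs like the 680M is overriding the GPU target before launching, e.g. (unsupported by AMD, use at your own risk):

# gfx1035 parts are commonly reported to work when spoofing gfx1030
export HSA_OVERRIDE_GFX_VERSION=10.3.0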

justin13888 avatar Apr 05 '24 21:04 justin13888