stable-diffusion-webui
                                
                        docker container support
This PR adds docker container support.
Having a docker container with the UI might be useful, e.g. for deployment testing or experimenting with untrusted models or scripts. Additional utilities can make adding SSL support with automatic certificate creation a breeze, too (hints are provided in the README.md file).
Due to technical limitations, only Nvidia acceleration is supported (via the Nvidia docker runtime) at the moment.
Additionally there's a GitHub Action that automatically builds and pushes this to docker hub (docker.io registry) under: https://hub.docker.com/r/emsi/stable-diffusion-webui
Keep in mind though that the aforementioned image is not meant to be just docker pulled and docker run. :)
You have to clone https://github.com/emsi/stable-diffusion-webui first, enter the stable-diffusion-webui/docker directory and only then run:
- docker compose pull and then
- ./run.sh.
This is necessary as this container uses host volumes, binding the stable-diffusion-webui directory into the container. It's meant for interoperability: you can update the image and all your local changes should remain intact; you can also update the local source with git pull and (unless there are some breaking changes) there's no need to update the container image.
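To make the host-volume binding concrete, here is a minimal sketch of a compose file matching the description above. The service name and image reference are illustrative assumptions; only the two volume entries mirror what the PR actually mounts.

```shell
# Sketch only (not the PR's actual docker-compose.yml): write an illustrative
# compose file showing the host-volume binding described above.
cat > docker-compose.sketch.yml <<'EOF'
services:
  webui:                                 # hypothetical service name
    image: emsi/stable-diffusion-webui   # image as pushed to docker.io
    runtime: nvidia                      # requires the Nvidia docker runtime
    volumes:
      - ..:/stable-diffusion-webui       # bind the cloned repo into the container
      - home:/root                       # named volume keeps ~/.cache etc. persistent
volumes:
  home:
EOF
grep -c ':/stable-diffusion-webui' docker-compose.sketch.yml   # → 1
```

Because the repository itself is bind-mounted, rebuilding the image and running `git pull` in the checkout stay independent of each other.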
Just use our fork and you need no codebase adaptations: https://github.com/P2Enjoy/stable-diffusion-docker @AUTOMATIC1111 @emsi
There are no codebase changes. The beauty of it is that it's the official repo without any modifications, just a Dockerfile and a docker-compose.yml for ease of use.
Thanks for this!
On first glance, I see two typos.
Dockerfile: CMD ["python", "launch.py", "--listen"] should be CMD ["python3", "launch.py", "--listen"]
docker/run.sh: if [ "$(which dokcer-compose)" ]; then should be if [ "$(which docker-compose)" ]; then
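The corrected `which docker-compose` check above presumably guards a fallback. A sketch of that detection logic, under the assumption that run.sh falls back to the `docker compose` plugin when the standalone binary is missing:

```shell
# Sketch of the compose-detection idea behind run.sh's check (assumption: fall
# back to the `docker compose` plugin when standalone docker-compose is absent).
compose_cmd() {
  if command -v docker-compose >/dev/null 2>&1; then
    echo "docker-compose"
  else
    echo "docker compose"
  fi
}
compose_cmd
```

`command -v` is the POSIX-portable way to do what `which` does here, and avoids depending on `which` being installed.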
Good, but I suggest running the prepare_environment method of launch.py before the CMD line, because it can avoid having to download requirements the first time the docker image is run.
Thanks for this!
On first glance, I see two typos.
Dockerfile: CMD ["python", "launch.py", "--listen"] should be CMD ["python3", "launch.py", "--listen"]
docker/run.sh: if [ "$(which dokcer-compose)" ]; then should be if [ "$(which docker-compose)" ]; then
Thanks. Nice catch. Fixed in #4426b, though it's perfectly fine to call python, as the container has only python3 installed as the system-wide python interpreter.
Good, but I suggest running the prepare_environment method of launch.py before the CMD line, because it can avoid having to download requirements the first time the docker image is run.
I was considering that but there are some issues:
- Docker images should remain small and contain no data by design.
- Models come from external sources and are not part of stable-diffusion-webui; hence those files are not distributed with stable-diffusion-webui. Putting them inside the container might suggest otherwise.
- User might modify the command, for example to add xformers, and thus the set of downloaded requirements will change.
To alleviate that, the models directory is defined as a VOLUME inside the container and also mounted to the model directory in the sources folder, so downloading is necessary only once, just like with usage without docker.
Additionally, if the models are already present in the appropriate folder, nothing is downloaded again.
@eliassama @emsi Our build is data agnostic: the build only includes the runtime, and any configuration, extension or model ever downloaded is available in the external folder /data
That's exactly how it is implemented.
The problem is I will not make any dockers, I will not be able to maintain this, and even if you do, you'd have to make a PR for me and wait for my approval every time, and I won't be able to review your changes anyway because I don't do docker.
My solution: make it an extension. I realize there can be a problem that someone who wants to make a docker possibly won't want to run the UI on his local machine to go into extensions tab and install the extension from there, but a person who wants to make a docker should have technical competence to clone the extension from git himself. And extension can still exist and be added to the index for visibility.
I understand your point, but having docker as a plugin makes no sense. The whole point of the container is to secure against running untrusted code like plug-ins or models from the internet (which are de facto code). This way I can test some random stuff without fear. Also, docker is very useful for testing, so the container should come before running the UI.
There's another idea that comes to my mind: perhaps you could point to my repo and the repo mentioned by @eliassama in the documentation as unofficial docker image sources? I'll set up my repo with the official stable-diffusion-webui as a submodule so it always remains up to date. Other than that, there won't be many updates to the docker as it just wraps the official code into a container.
If you're OK with that I'll set up a dedicated repo.
I used your Dockerfile to build an image, but it failed to run with an error message. What could be the issue?
AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
It's a limitation of docker as it does not honor the runtime argument during build.
The simplest workaround, when building on your local computer, is to use the nvidia runtime as the default runtime. To do so, edit your /etc/docker/daemon.json and add "default-runtime": "nvidia" so it looks something like this:
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
If you don't have the Nvidia runtime you should install it first: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#installation-guide
There's no point in using this docker image without a GPU, and the check is there to make sure you are aware of the potential issue.
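As a sketch, the daemon.json above can be written and sanity-checked from a shell. A scratch file is used here; on a real host the content belongs in /etc/docker/daemon.json (back it up first, merge rather than overwrite any existing settings, and restart the docker daemon afterwards):

```shell
# Write the daemon.json shown above to a scratch file and verify the key is set.
# On a real host this goes to /etc/docker/daemon.json instead.
cat > daemon.json <<'EOF'
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
EOF
grep -q '"default-runtime": "nvidia"' daemon.json && echo "default runtime set"
# prints "default runtime set"
```

With the default runtime set to nvidia, GPU access is also available during `docker build`, which is exactly what the runtime-argument limitation above prevents otherwise.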
I thought that adding the "--skip-torch-cuda-test" running parameter would make it work in CPU mode, but I wanted it to use the GPU, so I removed this parameter. I am not very familiar with this matter, so I would like to ask if adding the "--skip-torch-cuda-test" running parameter will make the container work in GPU mode after the image is built and the container is run. Thank you for your answer, I really appreciate it.
This is pretty awesome stuff, thanks! Have you tried running it on ECS?
Nope but it should work as long as there is Nvidia runtime installed and drivers on the host. I did try it though on GCP without issues :)
I thought that adding the "--skip-torch-cuda-test" running parameter would make it work in CPU mode, but I wanted it to use the GPU, so I removed this parameter. I am not very familiar with this matter, so I would like to ask if adding the "--skip-torch-cuda-test" running parameter will make the container work in GPU mode after the image is built and the container is run. Thank you for your answer, I really appreciate it.
Actually, the --skip-torch-cuda-test flag is used during build in the Dockerfile. Adding this argument will by no means make the container use the GPU (it would use the GPU with this argument only if it would do so without it). This argument just disables a test meant to let you know that your GPU is not visible to the application.
If you're not familiar with docker and installing the Nvidia runtime, then you should probably try running it first without docker.
i tested this PR and it works 👍
I tested this as well. It launched successfully in Portainer after I addressed an issue with using a share for my storage.
Would any warm-hearted gentleman or lady upload its image to a public hub?
I dropped a copy of my local image on https://hub.docker.com/r/ramblingcoder/stable-diffusion-webui-unofficial. Tried to make it very clear it was the unofficial image. Probably won't update it much. Hopefully this one gets merged in. It should have the latest as of commit 89f9faa.
Sure. I was off the grid for the past couple of days. I'll set up GitHub Actions to build the image and publish to GitHub's registry shortly.
I've managed to configure a GitHub Action to automatically build and push to docker hub (docker.io registry) under: https://hub.docker.com/r/emsi/stable-diffusion-webui
Keep in mind though that this image is not meant to be just docker pulled and docker run. :)
You have to clone https://github.com/emsi/stable-diffusion-webui first, enter the stable-diffusion-webui/docker directory and only then run:
- docker compose pull and then
- ./run.sh.
This is necessary as this container uses host volumes, binding the stable-diffusion-webui directory into the container. It's meant for interoperability: you can update the image and all your local changes should remain intact; you can also update the local source with git pull and (unless there are some breaking changes) there's no need to update the container image.
docker hub images are updated to 1.3.0 now
docker hub images are updated to 1.4.0 and release_candidate
Has anyone tried this with podman/rootless? I'll probably give it a go if not.
Looking at the PR, I have a couple questions--
- Saw this:
Beware though that inside container the UI is run as root. This has the implication that files written to volumes mounted to local path are owned by root!
Why is the UI run as root rather than as a user such as "a1111" or whatever? Would executable files set with chmod u+s also have the suid bit set on the host? (Some other security-type questions relating to volumes are below)
I'm more of a Podman person used to running things rootless, so I don't know if this would work, but couldn't you add to Dockerfile something like:
RUN useradd -u 1000 -U --create-home -r a1111 && echo a1111:a1111 | chpasswd && usermod -aG wheel a1111
You should then have /home/a1111/ at your disposal to add stuff or run stuff out of or whatever.  It could be mounted to the host or whatever.
(You could also apt install sudo if you wanted to enable root, with or without a password)
You'd probably want to run as a regular user instead of root, so in the docker-compose.yml you'd add after line 16 or whatever:
     user: "1000:1000"
- Do these lines do what I think they do?
    volumes:
      - ..:/stable-diffusion-webui
      - home:/root
That is-- wouldn't the user's home directory be bind-mounted to the container's /root (the default user in the container)?  If so, doesn't this mean that if a rogue extension decides to do bad things, it would actually make changes to the host user's directory (and with root privs if I understand the warning above?)  So an evil automatic1111 extension running in a container might be able to, say, read the host user's private keys in ~/.ssh or make changes to their ~/.bashrc or look at files in ~/Documents?
Forgive me if I'm misunderstanding how this works (you want it to mount ${HOME} or a subdirectory called "home"?), but in any event if this is only about the ~/.cache directory, maybe ONLY that directory should be bind mounted?  Or even better, since you don't want to give automatic1111 access to other cached files in .cache, maybe it should be bind-mounted to a dedicated volume/directory that isolates it completely from the rest of the host's files?
- Typo here. "precc" should be "press", and you probably want a comma after "logs".
Thanks for the consideration and patience with my questions (I've used Podman a lot more than Docker so only used to the rootless experience, have very limited docker-compose.yml experience, and have run automatic1111 exactly zero times heh).  I'll wait to hear back, but anyway would love to see this running via Podman too.
RUN useradd -u 1000 -U --create-home -r a1111 && echo a1111:a1111 | chpasswd && usermod -aG wheel a1111
This would cause more problems. You would end up with mixed host ownership and couldn't share files between host and container. Your container wouldn't be able to overwrite files created on the host, etc. A bad experience, since it might be very hard to debug such issues. Having the container run as root solves that. It's also a common practice to run containers as root. It's safe as long as you know what you are doing.
Please note that there is no good way to share the user id between the host and container. There are some solutions but none is perfect. In my own containers that I really do want to run as regular user I'm using the approach from my ML container: https://github.com/emsi/docker-ML/blob/master/cuda/run_as_user.sh, but I'm not comfortable to push that approach to 3rd party projects (it requires another compose service to be run briefly to ensure proper ownership and permissions).
2. Do these lines do what I think they do?
No, they don't. The - home:/root entry mounts a named volume home to maintain persistence of data saved in the home directory, mainly ~/.cache, ~/.config and possibly others. It's not a bind mount of the user's ${HOME}.
3. Typo here (https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/7946/files#diff-da8fcbe728a9172b578e5d754f8e2df214c658c4321f610e63dd68bea828ab49R36): "precc" should be "press", and you probably want a comma after "logs".
Thanks.
I don't know podman well but before you start make sure to use Nvidia docker runtime as runtime instead of runc.
RUN useradd -u 1000 -U --create-home -r a1111 && echo a1111:a1111 | chpasswd && usermod -aG wheel a1111
This would cause more problems. You would end up with mixed host ownership and couldn't share files between host and container. Your container wouldn't be able to overwrite files created on the host, etc. A bad experience, since it might be very hard to debug such issues. Having the container run as root solves that. It's also a common practice to run containers as root. It's safe as long as you know what you are doing.
Couldn't you have the UID/GID in the container be identical to the running user on the host? (see below) Having the container writing files as root by default seems potentially dangerous, but like I said I've used podman near-exclusively so I'm not sure what the standard practice is.
Please note that there is no good way to share the user id between the host and container. There are some solutions but none is perfect. In my own containers that I really do want to run as regular user I'm using the approach from my ML container: https://github.com/emsi/docker-ML/blob/master/cuda/run_as_user.sh, but I'm not comfortable to push that approach to 3rd party projects (it requires another compose service to be run briefly to ensure proper ownership and permissions).
Hmm okay.
No, they don't.
- home:/root mounts a named volume home to maintain persistence of data saved in the home directory, mainly ~/.cache, ~/.config and possibly others. It's not a bind mount of the user's ${HOME}.
Oh okay.. that's a relief thanks! I suspected maybe you'd need to explicitly say "${HOME}" but then some stupid AI was hallucinating apparently, insisting that, no, "home" was a Docker keyword that pointed to the user's actual directory. I couldn't find anything about this in the Docker docs, but I figured I'd ask. Might it be clearer to name the volume "a1111_home" on the volume side for clarity (and to keep the namespace clear in case there are other similarly named volumes)?
I don't know podman well but before you start make sure to use Nvidia docker runtime as runtime instead of runc.
Yep, thanks! I use podman quite a bit with cuda stuff- just never encountered the need to run normal applications as root within the container.  (On Podman, even if the user is "root" in the container it's still running as the host user, not root on the host) The most I've ever needed is to add --annotation run.oci.keep_original_groups=1  --userns=keep-id.   The UID/GID issue is easily handled, with the ownership being the local user even though the GID will vary for the container user-- once permissions are set up, everything can be read and written to from inside OR outside the container.   If I want to give it more access I can add --privileged.  See here for a project I've worked on that does this w/the DaVinci Resolve editor.
Docker has a rootless mode too. Would everything still work?
Thanks again.
Quick update: I just saw this:
Please note that there is no good way to share the user id between the host and container.
At first I read it as "there's no good way to share files between the host and container because of user id issues", but did you mean literally that you can't pass the information from the host to the container? If so, couldn't you just run with --user "${USERNAME}:${USERNAME}"? When setting up the image, pass through the UID/GID as env variables to the Dockerfile then set the files accordingly?
Oh okay.. that's a relief thanks! I suspected maybe you'd need to explicitly say "${HOME}" but then some stupid AI was hallucinating apparently, insisting that, no, "home" was a Docker keyword that pointed to the user's actual directory. I couldn't find anything about this in the Docker docs, but I figured I'd ask. Might it be clearer to name the volume "a1111_home" on the volume side for clarity (and to keep the namespace clear in case there are other similarly named volumes)?
That's not how you name volumes. Please check the docker-compose.yaml documentation. The volume is declared at the end of the file:
volumes:
  home:
The actual volume name as used by docker is:
${COMPOSE_PROJECT_NAME}_home in this case.
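A quick way to see the naming rule. The project name normally defaults to the directory containing the compose file; it is hardcoded here purely for illustration:

```shell
# Compose prefixes volume keys with the project name, so with project "webui"
# the volume key "home" becomes "webui_home". Project name is illustrative.
COMPOSE_PROJECT_NAME=webui
volume_key=home
echo "${COMPOSE_PROJECT_NAME}_${volume_key}"   # → webui_home
```

That prefixing is also why two compose projects can each declare a volume named `home` without colliding.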
Couldn't you have the UID/GID in the container be identical to the running user on the host? (see below) Having the container writing files as root by default seems potentially dangerous, but like I said I've used podman near-exclusively so I'm not sure what the standard practice is. (...)
Yep, thanks! I use podman quite a bit with cuda stuff- just never encountered the need to run normal applications as root within the container. (On Podman, even if the user is "root" in the container it's still running as the host user, not root on the host)
In docker, root within the container isn't root on the host either. Please read about capabilities: https://dockerlabs.collabnix.com/advanced/security/capabilities/
At first I read it as "there's no good way to share files between the host and container because of user id issues", but did you mean literally that you can't pass the information from the host to the container? If so, couldn't you just run with --user "${USERNAME}:${USERNAME}"? When setting up the image, pass through the UID/GID as env variables to the Dockerfile then set the files accordingly?
Please check https://github.com/emsi/docker-ML/blob/master/cuda/run_as_user.sh that I've linked before. If you want to run container as the host user id use just that (though remember to add another service as seen the aforementioned repo's docker-compose.yml). I'm just not comfortable with pushing this to other repo.
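For what it's worth, the usual sketch of the idea being discussed (this is not what the PR does, just the general technique): capture the host UID/GID in environment variables that a compose file could then consume in its user: field. HOST_UID and HOST_GID are names chosen here for illustration.

```shell
# Sketch of host/container user matching (not part of this PR): capture the
# host UID/GID so a compose file could use `user: "${HOST_UID}:${HOST_GID}"`.
HOST_UID="$(id -u)"
HOST_GID="$(id -g)"
export HOST_UID HOST_GID
echo "user: ${HOST_UID}:${HOST_GID}"
```

As noted above, this alone doesn't fix ownership of files the image already contains, which is why the linked run_as_user.sh needs an extra service to adjust permissions first.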
The actual volume name as used by docker is:
${COMPOSE_PROJECT_NAME}_home in this case.
Bard is such a liar :) Thanks for the above.
From "capabilities":
By default, Docker drops all capabilities except [those needed] using a whitelist approach.
I guess running rootless/Podman I never had to consider these issues, etc. but happy to read my concerns aren't warranted :) I'll probably just make a few changes locally so that it will run in Podman (as a user by default- don't think I'd need run_as_user.sh) - if they'd be of any use, let me know and I can send them over.
Last question, going back to this:
Beware though that inside container the UI is run as root. This has the implication that files written to volumes mounted to local path are owned by root!
This is what made me naively think that there could be a problem. Might anything need to be changed to make it clearer?
Cheers!
docker is great.