stable-diffusion-webui
[Bug]: Automatic Installation - webui-user.sh does not run (Linux Mint + 5700 XT)
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What happened?
I followed all the steps at https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs. When I choose "Run in Terminal" on webui-user.sh, a window pops up and closes itself after a moment; if I just choose "Run", no window appears at all.
If I run webui.sh, a terminal window pops up and closes itself after a moment, but webui.sh did create a venv folder in my directory.
Steps to reproduce the problem
- Installed Linux Mint
- Installed Python 3.10.6
- installed Git
- git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
- Place stable diffusion checkpoint (model.ckpt) in the models/Stable-diffusion directory
- Added --precision full --no-half to COMMANDLINE_ARGS= in webui-user.sh (I have a 5700 XT)
- Ran webui-user.sh
What should have happened?
I expected Firefox to open and show me the UI.
Commit where the problem happens
6cff440
What platforms do you use to access UI ?
Linux
What browsers do you use to access the UI ?
Mozilla Firefox
Command Line Arguments
#!/bin/bash
#########################################################
# Uncomment and change the variables below to your need:#
#########################################################
# Install directory without trailing slash
#install_dir="/home/$(whoami)"
# Name of the subdirectory
#clone_dir="stable-diffusion-webui"
# Commandline arguments for webui.py, for example: export COMMANDLINE_ARGS="--medvram --opt-split-attention"
#export COMMANDLINE_ARGS="--precision full --no-half"
# python3 executable
#python_cmd="python3"
# git executable
#export GIT="git"
# python3 venv without trailing slash (defaults to ${install_dir}/${clone_dir}/venv)
#venv_dir="venv"
# script to launch to start the app
#export LAUNCH_SCRIPT="launch.py"
# install command for torch
#export TORCH_COMMAND="pip install torch==1.12.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113"
# Requirements file to use for stable-diffusion-webui
#export REQS_FILE="requirements_versions.txt"
# Fixed git repos
#export K_DIFFUSION_PACKAGE=""
#export GFPGAN_PACKAGE=""
# Fixed git commits
#export STABLE_DIFFUSION_COMMIT_HASH=""
#export TAMING_TRANSFORMERS_COMMIT_HASH=""
#export CODEFORMER_COMMIT_HASH=""
#export BLIP_COMMIT_HASH=""
# Uncomment to enable accelerated launch
#export ACCELERATE="True"
###########################################
Additional information, context and logs
No response
Heya, what happens if you open a terminal, cd into the directory, and run webui.sh from there? That should show you what actually happens. Also, webui-user.sh is only for setting options, like you did, but you forgot to uncomment your line (delete the # in front of export COMMANDLINE_ARGS), so it wouldn't have taken effect anyway.
you need to add export HSA_OVERRIDE_GFX_VERSION=10.3.0
You need to run webui.sh, not webui-user.sh. Run it from the command line and post the output. The first time you run it, it should download the requirements, which takes a while.
cd stable-diffusion-webui
./webui.sh
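Put together, that looks like the sketch below; the override line is what the other comment suggests for Navi 10 cards, and is only needed on builds of webui.sh that don't set it themselves:

```shell
# Launch from a terminal so any error output stays visible.
# HSA_OVERRIDE_GFX_VERSION=10.3.0 makes ROCm treat the Navi 10 (RX 5700 XT)
# as a supported gfx1030 part; newer webui.sh versions set this automatically.
export HSA_OVERRIDE_GFX_VERSION=10.3.0
echo "HSA_OVERRIDE_GFX_VERSION=$HSA_OVERRIDE_GFX_VERSION"   # confirm it is set
# then:
# cd stable-diffusion-webui
# ./webui.sh
```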
> you need to add export HSA_OVERRIDE_GFX_VERSION=10.3.0
You don't need that any more on the latest version; webui.sh now does it automatically.
https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/645f4e7ef8c9d59deea7091a22373b2da2b780f2/webui.sh#L109
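A rough sketch of what the linked line amounts to (simplified, not the exact upstream code): webui.sh inspects the lspci output and exports the override itself when it sees a Navi 1x card. The gpu_info string below is a sample lspci line for a 5700 XT, not live detection:

```shell
# Simplified sketch of webui.sh's AMD auto-detection (assumed logic).
gpu_info="VGA compatible controller: AMD/ATI Navi 10 [Radeon RX 5700 / 5700 XT]"  # sample of: lspci | grep VGA
case "$gpu_info" in
  *"Navi 1"*)
    # Navi 1x GPUs need the gfx1030 override for ROCm to use them.
    export HSA_OVERRIDE_GFX_VERSION=10.3.0
    echo "Navi 1x detected, HSA_OVERRIDE_GFX_VERSION set to $HSA_OVERRIDE_GFX_VERSION"
    ;;
esac
```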
So, yesterday I did it the native way; it installed everything and started the UI in Firefox. I get output pictures, everything works, but it's slow: 20 steps at 512x768 takes about 3 minutes.
So I tried running webui.sh from the command line, and yes, it starts. But...
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################

################################################################
Running on tom user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
Create and activate python venv
################################################################
################################################################
Accelerating launch.py...
################################################################
The following values were not passed to accelerate launch and had defaults used instead:
        --num_processes was set to a value of 1
        --num_machines was set to a value of 1
        --mixed_precision was set to a value of 'no'
To avoid this warning pass in values for each of the problematic parameters or run accelerate config.
Python 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0]
Commit hash: 6cff4401824299a983c8e13424018efc347b4a2b
Installing requirements for Web UI
Launching Web UI with arguments: --precision full --no-half
/home/tom/Desktop/Stable Diffusion 2/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py:88: UserWarning: HIP initialization: Unexpected error from hipGetDeviceCount(). Did you run some cuda functions before calling NumHipDevices() that might have already set an error? Error 101: hipErrorInvalidDevice (Triggered internally at ../c10/hip/HIPFunctions.cpp:110.)
return torch._C._cuda_getDeviceCount() > 0
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Unexpected error from hipGetDeviceCount(). Did you run some cuda functions before calling NumHipDevices() that might have already set an error? Error 101: hipErrorInvalidDevice', memory monitor disabled
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading weights [fcfaf106f2] from /home/tom/Desktop/Stable Diffusion 2/stable-diffusion-webui/models/Stable-diffusion/hereismymodel.ckpt :)
Applying cross attention optimization (InvokeAI).
Textual inversion embeddings loaded(0):
Model loaded in 1.0s (0.3s create model, 0.6s load weights).
Running on local URL: http://127.0.0.1:7861
To create a public link, set share=True in launch().
I'm not a good code reader. I inserted --skip-torch-cuda-test everywhere I could, but I always get the output above. Shouldn't there be a message telling me "OK, no CUDA, but here's an AMD GPU"? I don't know whether my automatic1111 is now running on the CPU or on the AMD card (if it's running on the AMD, it's slow, and Google Colab is far faster).
Using Ubuntu 22.10 with my 5700 XT, I also had some problems in the beginning. --skip-torch-cuda-test did not solve the problem, because it always ran on the CPU for me, which is really slow.
With the rocm5.2 build it's working fine for me:
pip3 install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.2
pip3 list | grep torch
shows you which versions you have installed; it should look something like this:
torch 1.13.1+rocm5.2
torchvision 0.14.1+rocm5.2
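The +rocm local-version suffix is what tells you pip installed the ROCm wheels rather than the default CUDA/CPU ones. A small sketch of that check, where pip_output just mimics the pip3 list | grep torch output above:

```python
# Sketch: confirm the installed torch wheels are ROCm builds.
# pip_output mimics the output of `pip3 list | grep torch`.
pip_output = """\
torch        1.13.1+rocm5.2
torchvision  0.14.1+rocm5.2
"""

def rocm_builds(text: str) -> dict:
    """Map each listed package name to whether its version carries a +rocm tag."""
    result = {}
    for line in text.strip().splitlines():
        name, version = line.split()
        result[name] = "+rocm" in version
    return result

print(rocm_builds(pip_output))
```

A CUDA wheel would show a version like 1.13.1+cu117 instead and map to False here.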
My user was also missing the groups needed for graphics access, which is why ROCm still didn't recognize my GPU. https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/7152#issuecomment-1402619240 helped me:
sudo usermod -aG render YOURLINUXUSERNAME
sudo usermod -aG video YOURLINUXUSERNAME
and rebooted afterwards. Then webui.sh started without problems for me, using only --no-half --medvram as arguments. I didn't need to change anything else.
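A quick way to confirm the group change took effect after the reboot; this is a sketch, where groups holds a sample of what id -nG prints rather than the live output:

```shell
# Check that the user is in the groups ROCm needs (render and video).
groups="tom adm cdrom sudo dip plugdev render video"   # sample output of: id -nG
for g in render video; do
  case " $groups " in
    *" $g "*) echo "$g: ok" ;;
    *)        echo "$g: missing -- run: sudo usermod -aG $g \$USER and re-log" ;;
  esac
done
```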
Before that, when I couldn't get it to run natively, I had it running in a Docker container with the image
rocm/pytorch:rocm5.2_ubuntu20.04_py3.7_pytorch_1.11.0_navi21
which started webui.sh directly without any problems.
Hope it helps a bit.
This worked for me on my RX 6800. GPU is being fully utilized and images are being generated at a much more respectable speed. Thank you for posting this.
Thank you, this worked for me :)
Holy hell, it worked! I'm using 22.10 and a 5700 XT too, with Docker. It starts up slowly, but eventually it speeds up! Thank you, I was stuck on this for days.