stable-diffusion-webui
[Bug]: (AMD RX 580X) stderr: "hipErrorNoBinaryForGpu: Unable to find code object for all current devices!"
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What happened?
CUDA in PyTorch doesn't work; the GPU is never detected.
Steps to reproduce the problem
- install the program
What should have happened?
PyTorch should have detected the GPU and the web UI should have launched.
Commit where the problem happens
ea9bd9fc7409109adcd61b897abc2c8881161256
What platforms do you use to access the UI ?
Linux
What browsers do you use to access the UI ?
Mozilla Firefox, Google Chrome, Brave
Command Line Arguments
❯ git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
cd stable-diffusion-webui
python -m venv venv
source venv/bin/activate
python -m pip install --upgrade pip wheel
# It's possible that you don't need "--precision full", dropping "--no-half" however crashes my drivers
TORCH_COMMAND='pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.1.1' python launch.py --precision full --no-half
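For what it's worth, the GPU check that launch.py performs can be reproduced in isolation. Here is a rough sketch (describe_torch_gpu is a made-up helper name, but torch.version.hip and torch.cuda.is_available() are real torch attributes):

```python
# Hypothetical diagnostic helper: reports whether the installed torch build
# is a ROCm (HIP) build and whether it can see a GPU. Note that on an
# unsupported GPU (gfx803 here), torch.cuda.is_available() may abort the
# whole process with hipErrorNoBinaryForGpu instead of returning False,
# which is exactly the crash seen in this issue.
def describe_torch_gpu():
    try:
        import torch
    except ImportError:
        return "torch is not installed"
    hip = getattr(torch.version, "hip", None)
    if hip is None:
        return "torch build has no HIP support (CUDA/CPU-only build)"
    if not torch.cuda.is_available():
        return f"ROCm {hip} build, but no usable GPU was found"
    return f"ROCm {hip} build, GPU visible: {torch.cuda.get_device_name(0)}"

if __name__ == "__main__":
    print(describe_torch_gpu())
```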
List of extensions
None
Console logs
fatal: destination path 'stable-diffusion-webui' already exists and is not an empty directory.
Requirement already satisfied: pip in ./venv/lib/python3.10/site-packages (23.0)
Requirement already satisfied: wheel in ./venv/lib/python3.10/site-packages (0.38.4)
Python 3.10.9 (main, Dec 19 2022, 17:35:49) [GCC 12.2.0]
Commit hash: ea9bd9fc7409109adcd61b897abc2c8881161256
Traceback (most recent call last):
File "/home/roza/stable-diffusion-webui/launch.py", line 360, in <module>
prepare_environment()
File "/home/roza/stable-diffusion-webui/launch.py", line 272, in prepare_environment
run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'")
File "/home/roza/stable-diffusion-webui/launch.py", line 129, in run_python
return run(f'"{python}" -c "{code}"', desc, errdesc)
File "/home/roza/stable-diffusion-webui/launch.py", line 105, in run
raise RuntimeError(message)
RuntimeError: Error running command.
Command: "/home/roza/stable-diffusion-webui/venv/bin/python" -c "import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'"
Error code: -6
stdout: <empty>
stderr: "hipErrorNoBinaryForGpu: Unable to find code object for all current devices!"
Additional information
GPU: AMD RX 580X. I've tried to find out how to make PyTorch work, but I haven't found a way. I know there are people who have managed to make it work, but I don't understand how they did it.
RX 580 (non-X) here, same runtime error when running ./webui.sh according to the AMD installation guide.
Try adding --skip-torch-cuda-test to the launch.py arguments.
Same error with the RX 590, and --skip-torch-cuda-test also isn't working.
❯ TORCH_COMMAND='pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.1.1' python launch.py --skip-torch-cuda-test
Python 3.10.9 (main, Dec 19 2022, 17:35:49) [GCC 12.2.0]
Commit hash: 3715ece0adce7bf7c5e9c5ab3710b2fdc3848f39
Installing requirements for Web UI
Launching Web UI with arguments:
"hipErrorNoBinaryForGpu: Unable to find code object for all current devices!"
[1] 22967 IOT instruction (core dumped) TORCH_COMMAND= python launch.py --skip-torch-cuda-test
Try using export HSA_OVERRIDE_GFX_VERSION=10.3.0
beforehand. Some AMD GPUs refuse to use ROCM even if they actually can and this fixes that. Not sure if this works on RX500.
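For context, 10.3.0 is just the dotted form of the gfx1030 target (RDNA 2), which official PyTorch ROCm wheels ship precompiled kernels for; the override makes the runtime pretend the GPU is that target. A sketch of the naming convention (gfx_to_hsa_version is a hypothetical helper, not a ROCm API):

```python
# Sketch of the convention behind HSA_OVERRIDE_GFX_VERSION: a ROCm gfx
# target like gfx1030 encodes major.minor.stepping, and the override
# variable takes that dotted form.
def gfx_to_hsa_version(gfx: str) -> str:
    digits = gfx.removeprefix("gfx")
    # The last two characters are minor and stepping (stepping may be hex,
    # e.g. gfx90a); everything before them is the major version.
    major, minor, step = digits[:-2], digits[-2], digits[-1]
    return f"{int(major)}.{int(minor, 16)}.{int(step, 16)}"

print(gfx_to_hsa_version("gfx1030"))  # → 10.3.0 (the override used above)
print(gfx_to_hsa_version("gfx803"))   # → 8.0.3 (Polaris / RX 500 series)
```

The catch for this thread: the override only relabels the card; the RX 500 series is GCN, not RDNA 2, so gfx1030 code objects may still crash on it.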
On RX590 that only gets me a segmentation fault. If I don't add that override it faults a little later instead.
Same, but I managed to get sdwebui running on Windows. Surprisingly it's easier.
I have the same issue when starting:
source venv/bin/activate
export HSA_OVERRIDE_GFX_VERSION=10.3.0
python launch.py --skip-torch-cuda-test
Output:
Python 3.10.9 (main, Dec 08 2022, 14:49:06) [GCC]
Commit hash: 0cc0ee1bcb4c24a8c9715f66cede06601bfc00c8
Installing requirements for Web UI
Launching Web UI with arguments:
No module 'xformers'. Proceeding without it.
Loading weights [6ce0161689] from /home/user/programs/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Creating model from config: /home/user/programs/stable-diffusion-webui/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
./run.sh: line 7: 7440 Segmentation fault (core dumped) python launch.py --skip-torch-cuda-test
System: AMD Radeon RX 580, Linux 6.2.1-1-default #1 SMP PREEMPT_DYNAMIC, openSUSE Tumbleweed (VERSION_ID="20230302")
I have installed all the rocm-opencl etc. packages from http://repo.radeon.com/rocm/zyp/zypper/, but that is probably not enough?
ROCm 5.1.1 is no longer available at that link. You can get an earlier build from the "previous versions of PyTorch" page.
Try replacing the pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.1.1
part of TORCH_COMMAND with pip install torch==1.13.0+rocm5.2 torchvision==0.14.0+rocm5.2 --extra-index-url https://download.pytorch.org/whl/rocm5.2
I have tried several ROCm versions, and also installed the packages from https://repo.radeon.com/amdgpu/5.4.3/sle/15.4/proprietary/x86_64/ and https://repo.radeon.com/amdgpu/5.4.3/sle/$amdgpudistro/main/x86_64,
with no success. So I have created a run script with these options:
python3 -m venv venv
source venv/bin/activate
# first time install all pip requirements
pip install --upgrade pip wheel xformers
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.2
#pip install -r requirements.txt
#pip install torch==1.13.0+rocm5.2 torchvision==0.14.0+rocm5.2 --extra-index-url https://download.pytorch.org/whl/rocm5.2
# according https://llvm.org/docs/AMDGPUUsage.html#processors
export PYTORCH_ROCM_ARCH=gfx803
export HSA_OVERRIDE_GFX_VERSION=10.3.0
export ROC_ENABLE_PRE_VEGA=1
export ACCELERATE="True"
#export COMMANDLINE_ARGS="--medvram --no-half --listen --skip-torch-cuda-test"
export COMMANDLINE_ARGS="--medvram --no-half --listen --skip-torch-cuda-test"
#TORCH_COMMAND='pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.1.1' python3 launch.py
python3 launch.py
The screen flashes very quickly and I get this output:
Launching Web UI with arguments: --medvram --no-half --listen --skip-torch-cuda-test
No module 'xformers'. Proceeding without it.
==============================================================================
You are running torch 1.13.0+rocm5.2.
The program is tested to work with torch 1.13.1.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.
Use --skip-version-check commandline argument to disable this check.
==============================================================================
Loading weights [6ce0161689] from /home/user/programs/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Creating model from config: /home/user/programs/stable-diffusion-webui/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
./run.sh: line 25: 8598 Segmentation fault (core dumped) python3 launch.py
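To confirm which gfx target ROCm actually reports for the card, you can grep the ISA name out of rocminfo output. A small sketch (the sample text is assumed, and find_gfx_targets is a hypothetical helper; run rocminfo yourself to get the real value):

```python
import re

# Sample of what rocminfo prints per agent; not captured from a real RX 580.
SAMPLE_ROCMINFO = """
Agent 2
  Name:                    gfx803
  Marketing Name:          Radeon RX 580 Series
"""

def find_gfx_targets(text: str) -> list[str]:
    # rocminfo lists each agent's ISA/name as gfx<id> (hex digits possible,
    # e.g. gfx90a), so match that pattern anywhere in the output.
    return re.findall(r"\bgfx[0-9a-f]+\b", text)

print(find_gfx_targets(SAMPLE_ROCMINFO))  # → ['gfx803']
```

If it reports gfx803, the stock PyTorch ROCm wheels have no code object for it, which matches the hipErrorNoBinaryForGpu message above.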
With the RX 580 you are out of luck: it was dropped from ROCm 5.x, and hence from PyTorch. There is a way to install patched versions of both ROCm and PyTorch, but it's unreliable.
I assume the same applies to rx590? That stinks.
Yep, it does; same architecture. We are a very big, sad bunch of peeps. Today I was able to run Stable Diffusion on my RX 580 OC, but after a few tries errors started appearing and it always generated the same static noise every time. I couldn't figure out what changed.
Is it true that we can revert to an older version of PyTorch to make it work? Is there a Discord or public chat keeping track of these updates as they happen?
This error also occurs on laptops with dual graphics. Friggen sad, and poorly documented.