[Bug]: Agent not found on 9070XT

Open · AlighieriX opened this issue 3 months ago · 1 comment

Checklist

  • [ ] The issue exists after disabling all extensions
  • [x] The issue exists on a clean installation of webui
  • [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • [ ] The issue exists in the current version of the webui
  • [ ] The issue has not been reported before recently
  • [ ] The issue has been reported before but has not been fixed yet

What happened?

When I try to run the bat file, it says no agents were found, and then it errors out when I try to generate an image.

Steps to reproduce the problem

Run the bat file.

What should have happened?

The program should run and be able to generate images.

What browsers do you use to access the UI?

No response

Sysinfo

sysinfo-2025-09-16-20-09.json

Console logs

venv "K:\New SD\stable-diffusion-webui-amdgpu\venv\Scripts\Python.exe"
WARNING: ZLUDA works best with SD.Next. Please consider migrating to SD.Next.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1-amd-43-g1ad6edf1
Commit hash: 1ad6edf170c2c4307e0d2400f760a149e621dc38
ROCm: no agent was found
ROCm: version=6.2
ZLUDA support: experimental
ZLUDA load: path='K:\New SD\stable-diffusion-webui-amdgpu\.zluda' nightly=False
W0916 16:08:44.926450 14832 venv\Lib\site-packages\torch\distributed\elastic\multiprocessing\redirects.py:29] NOTE: Redirects are currently not supported in Windows or MacOs.
K:\New SD\stable-diffusion-webui-amdgpu\venv\lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
  warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
K:\New SD\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --use-zluda --opt-sub-quad-attention --no-half- --disable-nan-check --autolaunch
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
K:\New SD\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\onnx\_internal\registration.py:162: OnnxExporterWarning: Symbolic function 'aten::scaled_dot_product_attention' already registered for opset 14. Replacing the existing function with new function. This is unexpected. Please report it on https://github.com/pytorch/pytorch/issues.
  warnings.warn(
ONNX failed to initialize: module 'optimum.onnxruntime.modeling_diffusion' has no attribute 'ORTPipelinePart'
Loading weights [6ce0161689] from K:\New SD\stable-diffusion-webui-amdgpu\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: K:\New SD\stable-diffusion-webui-amdgpu\configs\v1-inference.yaml
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 33.6s (prepare environment: 55.1s, initialize shared: 1.6s, load scripts: 1.3s, create ui: 0.5s, gradio launch: 0.7s).
Applying attention optimization: sub-quadratic... done.
Model loaded in 6.1s (load weights from disk: 0.1s, create model: 1.2s, apply weights to model: 3.7s, apply half(): 0.5s, calculate empty prompt: 0.5s).

Additional information

No response

AlighieriX · Sep 16 '25 20:09

Hey, you installed the wrong HIP SDK version: 6.2 doesn't support gfx12 cards natively. To fix it (see the command sketch below):

  1. Uninstall everything belonging to HIP SDK 6.2 and install HIP SDK 6.4 instead. A PC restart is required after doing this.
  2. Remove --no-half- and --disable-nan-check from webui-user.bat and save it.
  3. Delete the venv and .zluda folders (both are located in the stable-diffusion-webui-amdgpu folder).
  4. Relaunch webui-user.bat.
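For reference, a minimal command-prompt sketch of steps 2-4, assuming the install lives at K:\New SD\stable-diffusion-webui-amdgpu as in the log above (adjust the path to your setup):

    cd /d "K:\New SD\stable-diffusion-webui-amdgpu"
    rem delete the Python environment and the ZLUDA cache so they get rebuilt against HIP SDK 6.4
    rmdir /s /q venv
    rmdir /s /q .zluda
    rem edit webui-user.bat first so COMMANDLINE_ARGS no longer contains --no-half- or --disable-nan-check
    webui-user.bat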

CS1o · Sep 19 '25 06:09

Hello, I'm experiencing the same end result as OP ('No HIP GPUs are available') on a 9070XT, though I'm on Linux with Python 3.11 and rocm-core & rocm-hip-sdk 7.1.1. ROCm seems happy, but I suspect Torch is not. Does the current webui.sh support ROCm 7.x, or is additional work needed?

I grabbed e61adddd, which exits with raise NotImplementedError("TODO"); that message is why I suspect I should have blocked the recent ROCm 7.x update. I can launch using commit 72f1fe43 (which demonstrates the 'no GPU' issue), so here's the output of that run in case I've overlooked something else on my end.

Python 3.11.14 (main, Nov 11 2025, 20:28:59) [GCC 15.2.1 20250813]
Version: v1.10.1-amd-49-g72f1fe43
Commit hash: 72f1fe431d010765b618e2b0bc4f4584957029b8
WARNING: you should not skip torch test unless you want CPU to work.
ROCm: AMD toolkit detected
ROCm: agents=['gfx1201']
ROCm: version=7.1, using agent gfx1201
/home/mahashel/.venvs/stable-diff/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_validation.py:113: UserWarning: WARNING: failed to get cudart_version from onnxruntime build info.
  warnings.warn("WARNING: failed to get cudart_version from onnxruntime build info.")
/home/mahashel/.venvs/stable-diff/lib/python3.11/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
  warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
/home/mahashel/.venvs/stable-diff/lib/python3.11/site-packages/pytorch_lightning/utilities/distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --skip-torch-cuda-test
Warning: caught exception 'No HIP GPUs are available', memory monitor disabled
/home/mahashel/.venvs/stable-diff/lib/python3.11/site-packages/torch/amp/autocast_mode.py:250: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
  warnings.warn(
ONNX: version=1.19.2 provider=ROCMExecutionProvider, available=['CUDAExecutionProvider', 'CPUExecutionProvider']
Loading weights [31e35c80fc] from /home/mahashel/code/stable-diffusion-webui-amdgpu/models/Stable-diffusion/sd_xl_base_1.0.safetensors
Running on local URL:  http://127.0.0.1:7860

I'm happy to migrate this to a new issue if it's considered a very different topic, despite the same outcome as the OP.

mahashel · Dec 07 '25 18:12

Linux support was removed during the rewrite of ROCm detection; I'm planning to add Linux support again in a better way. For now, you need to manually install ROCm PyTorch on Linux: https://pytorch.org/get-started/locally/
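For example, something along these lines inside the webui's venv; a sketch only, since the exact wheel index tag (rocm6.2 here) depends on which ROCm builds pytorch.org currently publishes:

    # inside the activated venv; pick the index URL that pytorch.org lists for your ROCm version
    pip install --force-reinstall torch torchvision --index-url https://download.pytorch.org/whl/rocm6.2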

lshqqytiger · Dec 08 '25 04:12

Understood. I cleaned up my venv, manually installed the latest version of PyTorch (which supports ROCm 7.1), and I'm back in business. It runs great on commit 72f1fe43 until official Linux support returns. Looking forward to the new ROCm detection on Linux; some cool stuff from AMD on that subject lately.
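In case it helps anyone landing here, a quick one-liner to confirm the installed build actually targets ROCm and sees the card (torch.version.hip is only set on ROCm builds of PyTorch):

    python -c "import torch; print(torch.version.hip, torch.cuda.is_available(), torch.cuda.get_device_name(0))"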

mahashel · Dec 11 '25 23:12