RuntimeError: Input type (float) and bias type (struct c10::Half) should be the same
Checklist
- [x] The issue exists after disabling all extensions
- [x] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
What happened?
Image generation stopped working a few days ago after I updated my AMD drivers. Even after downgrading to the previous driver version, doing a fresh install of SD, and restarting my PC, the error persists. I found a few people online with similar problems, but after trying a few of their suggestions I had no result.
Steps to reproduce the problem
Try to generate any image, at any size.
What should have happened?
Nothing, just a timeout at around 0.4 seconds with the error:
RuntimeError: Input type (float) and bias type (struct c10::Half) should be the same.
What browsers do you use to access the UI?
Other
Sysinfo
Console logs
venv "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\Scripts\Python.exe"
WARNING: ZLUDA works best with SD.Next. Please consider migrating to SD.Next.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1-amd-36-g679c645e
Commit hash: 679c645ec84e40dd14d527dbeb03fab259087187
ROCm: agents=['gfx1100']
ROCm: version=6.2, using agent gfx1100
ZLUDA support: experimental
ZLUDA load: path='X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\.zluda' nightly=False
X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\cuda\__init__.py:936: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\pytorch\c10\cuda\CUDAFunctions.cpp:109.)
r = torch._C._cuda_getDeviceCount() if nvml_count < 0 else nvml_count
X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
rank_zero_deprecation(
Launching Web UI with arguments: --use-zluda
Warning: caught exception 'CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero.', memory monitor disabled
ONNX: version=1.22.0 provider=CPUExecutionProvider, available=['AzureExecutionProvider', 'CPUExecutionProvider']
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
Loading weights [8463ca6405] from X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\models\Stable-diffusion\revAnimated_v2Rebirth.safetensors
Creating model from config: X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\configs\v1-inference.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 9.4s (prepare environment: 11.3s, initialize shared: 0.7s, load scripts: 0.8s, create ui: 0.4s, gradio launch: 0.5s).
creating model quickly: OSError
Traceback (most recent call last):
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 409, in hf_raise_for_status
response.raise_for_status()
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\requests\models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/None/resolve/main/config.json
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\utils\hub.py", line 342, in cached_file
resolved_file = hf_hub_download(
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1008, in hf_hub_download
return _hf_hub_download_to_cache_dir(
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1115, in _hf_hub_download_to_cache_dir
_raise_on_head_call_error(head_call_error, force_download, local_files_only)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1645, in _raise_on_head_call_error
raise head_call_error
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1533, in _get_metadata_or_catch_error
metadata = get_hf_file_metadata(
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1450, in get_hf_file_metadata
r = _request_wrapper(
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 286, in _request_wrapper
response = _request_wrapper(
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 310, in _request_wrapper
hf_raise_for_status(response)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 459, in hf_raise_for_status
raise _format(RepositoryNotFoundError, message, response) from e
huggingface_hub.errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-6834e7ae-073ada236345ad1d250c02f0;5edc06ea-ddb4-4ced-82a4-a4cca66f18d4)
Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated. For more details, see https://huggingface.co/docs/huggingface_hub/authentication
Invalid username or password.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\modules\initialize.py", line 149, in load_model
shared.sd_model # noqa: B018
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\modules\shared_items.py", line 190, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 693, in get_sd_model
load_model()
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 831, in load_model
sd_model = instantiate_from_config(sd_config.model, state_dict)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 775, in instantiate_from_config
return constructor(**params)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 563, in __init__
self.instantiate_cond_stage(cond_stage_config)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 630, in instantiate_cond_stage
model = instantiate_from_config(config)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 104, in __init__
self.transformer = CLIPTextModel.from_pretrained(version)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\modules\sd_disable_initialization.py", line 68, in CLIPTextModel_from_pretrained
res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\modeling_utils.py", line 262, in _wrapper
return func(*args, **kwargs)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\modeling_utils.py", line 3540, in from_pretrained
resolved_config_file = cached_file(
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\utils\hub.py", line 365, in cached_file
raise EnvironmentError(
OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`
Failed to create model quickly; will retry using slow method.
Applying attention optimization: InvokeAI... done.
Model loaded in 3.2s (load weights from disk: 0.4s, create model: 1.2s, apply weights to model: 1.2s, apply half(): 0.2s, calculate empty prompt: 0.2s).
0%| | 0/20 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(l4q3693o74ofdba)', <gradio.routes.Request object at 0x000001F0444DE9B0>, 'A dancing, dog, ', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 'DemoFusion', False, 128, 64, 4, 2, False, 10, 1, 1, 64, False, True, 3, 1, 1, True, 0.85, 0.6, 4, False, False, 512, 64, True, True, True, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 74, in f
res = list(func(*args, **kwargs))
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 53, in f
res = func(*args, **kwargs)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\modules\txt2img.py", line 109, in txt2img
processed = processing.process_images(p)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\modules\processing.py", line 849, in process_images
res = process_images_inner(p)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\modules\processing.py", line 1083, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\modules\processing.py", line 1441, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\modules\sd_samplers_kdiffusion.py", line 233, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\modules\sd_samplers_common.py", line 272, in launch_sampling
return func()
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\modules\sd_samplers_kdiffusion.py", line 233, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\modules\sd_samplers_cfg_denoiser.py", line 249, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\modules\sd_hijack_utils.py", line 22, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\modules\sd_hijack_utils.py", line 34, in __call__
return self.__sub_func(self.__orig_func, *args, **kwargs)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\modules\sd_hijack_unet.py", line 50, in apply_model
result = orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\modules\sd_hijack_utils.py", line 22, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\modules\sd_hijack_utils.py", line 36, in __call__
return self.__orig_func(*args, **kwargs)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\modules\sd_unet.py", line 91, in UNetModel_forward
return original_forward(self, x, timesteps, context, *args, **kwargs)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
h = module(h, emb, context)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 86, in forward
x = layer(x)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\extensions-builtin\Lora\networks.py", line 599, in network_Conv2d_forward
return originals.Conv2d_forward(self, input)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\conv.py", line 554, in forward
return self._conv_forward(input, self.weight, self.bias)
File "X:\stable-diffusion-webui-amdgpu-master\sd zlude\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\conv.py", line 549, in _conv_forward
return F.conv2d(
RuntimeError: Input type (float) and bias type (struct c10::Half) should be the same
Additional information
Any advice/help would be appreciated. I'm not too good with computers, so please be patient.
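For context (not part of the original report): the error itself just indicates a dtype mismatch, i.e. the conv layer's weights/bias were cast to float16 (half) while the incoming tensor is still float32. Combined with the "CUDA unknown error ... Setting the available devices to be zero" warning earlier in the log, it looks like the model was half()'d for the GPU but the forward pass ended up running in float32. A minimal, hypothetical PyTorch sketch (not webui code) that triggers the same kind of error:

```python
import torch

# Hypothetical minimal reproduction (not webui code): the layer is in half
# precision while the input stays float32, so the forward pass raises a dtype
# mismatch RuntimeError much like the one in the traceback above (the exact
# wording differs between the CPU and CUDA code paths).
conv = torch.nn.Conv2d(3, 8, kernel_size=3).half()   # weights + bias -> float16
x = torch.randn(1, 3, 64, 64)                         # input stays float32

try:
    conv(x)
except RuntimeError as e:
    print(e)

# Making the dtypes agree avoids the error, e.g. by keeping everything float32:
_ = conv.float()(x)
```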
Hey, try downgrading the AMD Adrenalin drivers to 25.4.1, as 25.5.1 and higher have problems.
Then also upgrade your Python to 3.10.11 (64-bit), delete the venv folder, and relaunch webui-user.bat.
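Not from the original thread, but after recreating the venv it may be worth confirming that PyTorch can actually see the GPU before launching the UI, since the log above shows CUDA initialization failing and zero devices being reported. A rough check, assuming you run it with the venv's Python interpreter:

```python
import torch

# Quick sanity check: run with the webui venv's Python.
# If the driver/ZLUDA setup is healthy this should report at least one device;
# in the log above, initialization failed and zero devices were available.
print("torch:", torch.__version__)
print("cuda available:", torch.cuda.is_available())
print("device count:", torch.cuda.device_count())
if torch.cuda.is_available():
    print("device 0:", torch.cuda.get_device_name(0))
```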
I have the same issue with a 9060 XT and Adrenalin 25.6.1. The problem is that I can't install an older driver version, because the card isn't supported by 25.4.1.
@gogaletyaev have you downloaded the correct gfx files for gfx1200 and placed them in your ROCm library? Also, please provide a full cmd log. For the RX 9060, also see here: https://github.com/lshqqytiger/ZLUDA/issues/116