stable-diffusion-webui-amdgpu

[Bug]: RuntimeError: mixed dtype (CPU): expect parameter to have scalar type of Float

Open • SkadiAegis opened this issue 7 months ago • 1 comment

Checklist

  • [x] The issue exists after disabling all extensions
  • [x] The issue exists on a clean installation of webui
  • [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • [x] The issue exists in the current version of the webui
  • [ ] The issue has not been reported before recently
  • [ ] The issue has been reported before but has not been fixed yet

What happened?

When I try to generate an image, the error "RuntimeError: mixed dtype (CPU): expect parameter to have scalar type of Float" appears in the console and no image is generated. This started with driver version 25.5.1. I have tried a clean install and the same thing happens. It seems to be the same error as in #604.
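
For reference, here is a minimal standalone sketch (my own, not taken from the webui code) that seems to reproduce the same PyTorch error on CPU: float32 activations passed to F.group_norm together with half-precision weight/bias. The traceback below suggests this is the combination the UNet ends up in, since GroupNorm32 upcasts its input with x.float() while the checkpoint weights are in fp16 after apply half().

```python
import torch
import torch.nn.functional as F

# float32 activations on CPU (GroupNorm32 upcasts its input with x.float(),
# see util.py line 275 in the traceback below)
x = torch.randn(2, 32, 16, 16)

# half-precision GroupNorm parameters, as the weights are after "apply half()"
weight = torch.randn(32, dtype=torch.float16)
bias = torch.randn(32, dtype=torch.float16)

# On CPU this raises:
#   RuntimeError: mixed dtype (CPU): expect parameter to have scalar type of Float
F.group_norm(x, 32, weight, bias, 1e-5)
```

If I understand the check correctly, the CUDA kernels don't enforce this, which would explain why it only shows up when the run falls back to the CPU (see the CUDA init warning in the log).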

Steps to reproduce the problem

1. Press "Generate".
2. The console throws the error and no image is generated.

What should have happened?

Image should get generated successfully.

What browsers do you use to access the UI?

Mozilla Firefox

Sysinfo

sysinfo-2025-05-10-16-46.json

Console logs

From https://github.com/lshqqytiger/stable-diffusion-webui-directml
 * branch              HEAD       -> FETCH_HEAD
Already up to date.
venv "E:\stable-diffusion-webui-amdgpu\venv\Scripts\Python.exe"
WARNING: ZLUDA works best with SD.Next. Please consider migrating to SD.Next.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1-amd-36-g679c645e
Commit hash: 679c645ec84e40dd14d527dbeb03fab259087187
ROCm: agents=['gfx1100', 'gfx1036']
ROCm: version=6.2, using agent gfx1100
ZLUDA support: experimental
ZLUDA load: path='E:\stable-diffusion-webui-amdgpu\.zluda' nightly=False
E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\cuda\__init__.py:936: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\pytorch\c10\cuda\CUDAFunctions.cpp:109.)
  r = torch._C._cuda_getDeviceCount() if nvml_count < 0 else nvml_count
E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
  warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --use-zluda --upcast-sampling
Warning: caught exception 'CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero.', memory monitor disabled
ONNX: version=1.21.0 provider=CPUExecutionProvider, available=['AzureExecutionProvider', 'CPUExecutionProvider']
[-] ADetailer initialized. version: 25.3.0, num models: 17
Forge: False, reForge: False
Loading weights [7c97ecf786] from E:\stable-diffusion-webui-amdgpu\models\Stable-diffusion\ponyRealism_v22MainVAE.safetensors
Creating model from config: E:\stable-diffusion-webui-amdgpu\repositories\generative-models\configs\inference\sd_xl_base.yaml
creating model quickly: OSError
Traceback (most recent call last):
  File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 409, in hf_raise_for_status
    response.raise_for_status()
  File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\requests\models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/None/resolve/main/config.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\utils\hub.py", line 342, in cached_file
    resolved_file = hf_hub_download(
  File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 961, in hf_hub_download
    return _hf_hub_download_to_cache_dir(
  File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1068, in _hf_hub_download_to_cache_dir
    _raise_on_head_call_error(head_call_error, force_download, local_files_only)
  File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1596, in _raise_on_head_call_error
    raise head_call_error
  File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1484, in _get_metadata_or_catch_error
    metadata = get_hf_file_metadata(
  File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1401, in get_hf_file_metadata
    r = _request_wrapper(
  File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 285, in _request_wrapper
    response = _request_wrapper(
  File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 309, in _request_wrapper
    hf_raise_for_status(response)
  File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 459, in hf_raise_for_status
    raise _format(RepositoryNotFoundError, message, response) from e
huggingface_hub.errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-681f8198-5c2859f866d902e2118eecfe;8d13fdb1-20b2-4375-b626-322de8522b2a)

Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated. For more details, see https://huggingface.co/docs/huggingface_hub/authentication
Invalid username or password.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\vsnch\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users\vsnch\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\vsnch\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "E:\stable-diffusion-webui-amdgpu\modules\initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "E:\stable-diffusion-webui-amdgpu\modules\shared_items.py", line 190, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "E:\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 693, in get_sd_model
    load_model()
  File "E:\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 831, in load_model
    sd_model = instantiate_from_config(sd_config.model, state_dict)
  File "E:\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 775, in instantiate_from_config
    return constructor(**params)
  File "E:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\models\diffusion.py", line 61, in __init__
    self.conditioner = instantiate_from_config(
  File "E:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\util.py", line 175, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "E:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\encoders\modules.py", line 88, in __init__
    embedder = instantiate_from_config(embconfig)
  File "E:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\util.py", line 175, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "E:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\encoders\modules.py", line 361, in __init__
    self.transformer = CLIPTextModel.from_pretrained(version)
  File "E:\stable-diffusion-webui-amdgpu\modules\sd_disable_initialization.py", line 68, in CLIPTextModel_from_pretrained
    res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
  File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\modeling_utils.py", line 262, in _wrapper
    return func(*args, **kwargs)
  File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\modeling_utils.py", line 3540, in from_pretrained
    resolved_config_file = cached_file(
  File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\utils\hub.py", line 365, in cached_file
    raise EnvironmentError(
OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`

Failed to create model quickly; will retry using slow method.
Loading VAE weights specified in settings: E:\stable-diffusion-webui-amdgpu\models\VAE\fixFP16ErrorsSDXLLowerMemoryUse_v10.safetensors
Applying attention optimization: InvokeAI... done.
*** Error loading embedding bad-image-v2-39000.pt
    Traceback (most recent call last):
      File "E:\stable-diffusion-webui-amdgpu\modules\textual_inversion\textual_inversion.py", line 209, in load_from_dir
        self.load_from_file(fullfn, fn)
      File "E:\stable-diffusion-webui-amdgpu\modules\textual_inversion\textual_inversion.py", line 180, in load_from_file
        data = torch.load(path, map_location="cpu")
      File "E:\stable-diffusion-webui-amdgpu\modules\safe.py", line 108, in load
        return load_with_extra(filename, *args, extra_handler=global_extra_handler, **kwargs)
      File "E:\stable-diffusion-webui-amdgpu\modules\safe.py", line 156, in load_with_extra
        return unsafe_torch_load(filename, *args, **kwargs)
      File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\serialization.py", line 1470, in load
        raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
    _pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
        (1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
        (2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
        WeightsUnpickler error: Unsupported global: GLOBAL torch.nn.modules.container.ParameterDict was not an allowed global by default. Please use `torch.serialization.add_safe_globals([ParameterDict])` or the `torch.serialization.safe_globals([ParameterDict])` context manager to allowlist this global if you trust this class/function.

    Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.

---
Model loaded in 23.1s (load weights from disk: 0.3s, create model: 5.9s, apply weights to model: 14.6s, apply half(): 0.2s, load VAE: 0.8s, load textual inversion embeddings: 0.3s, calculate empty prompt: 0.9s).
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 99.9s (prepare environment: 90.6s, initialize shared: 1.4s, list SD models: 0.2s, load scripts: 0.9s, create ui: 23.4s, gradio launch: 0.2s).
  0%|                                                                                           | 0/20 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(f5qzq1xt6qpzhw9)', <gradio.routes.Request object at 0x0000021088AC2530>, '', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_filter_method': 'Area', 'ad_mask_k': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_filter_method': 'Area', 'ad_mask_k': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_filter_method': 'Area', 'ad_mask_k': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 
'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_filter_method': 'Area', 'ad_mask_k': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, False, 'Matrix', 'Columns', 'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Attention', [False], '0', '0', '0.4', None, '0', '0', False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, [], 30, '', 4, [], 1, '', '', '', '', '') {}
    Traceback (most recent call last):
      File "E:\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 74, in f
        res = list(func(*args, **kwargs))
      File "E:\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 53, in f
        res = func(*args, **kwargs)
      File "E:\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "E:\stable-diffusion-webui-amdgpu\modules\txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "E:\stable-diffusion-webui-amdgpu\modules\processing.py", line 849, in process_images
        res = process_images_inner(p)
      File "E:\stable-diffusion-webui-amdgpu\modules\processing.py", line 1083, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "E:\stable-diffusion-webui-amdgpu\modules\processing.py", line 1441, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "E:\stable-diffusion-webui-amdgpu\modules\sd_samplers_kdiffusion.py", line 233, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "E:\stable-diffusion-webui-amdgpu\modules\sd_samplers_common.py", line 272, in launch_sampling
        return func()
      File "E:\stable-diffusion-webui-amdgpu\modules\sd_samplers_kdiffusion.py", line 233, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
        return func(*args, **kwargs)
      File "E:\stable-diffusion-webui-amdgpu\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\stable-diffusion-webui-amdgpu\modules\sd_samplers_cfg_denoiser.py", line 249, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\stable-diffusion-webui-amdgpu\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "E:\stable-diffusion-webui-amdgpu\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "E:\stable-diffusion-webui-amdgpu\modules\sd_models_xl.py", line 43, in apply_model
        return self.model(x, t, cond)
      File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\stable-diffusion-webui-amdgpu\modules\sd_hijack_utils.py", line 22, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "E:\stable-diffusion-webui-amdgpu\modules\sd_hijack_utils.py", line 34, in __call__
        return self.__sub_func(self.__orig_func, *args, **kwargs)
      File "E:\stable-diffusion-webui-amdgpu\modules\sd_hijack_unet.py", line 50, in apply_model
        result = orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs)
      File "E:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\diffusionmodules\wrappers.py", line 28, in forward
        return self.diffusion_model(
      File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\stable-diffusion-webui-amdgpu\modules\sd_unet.py", line 91, in UNetModel_forward
        return original_forward(self, x, timesteps, context, *args, **kwargs)
      File "E:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 993, in forward
        h = module(h, emb, context)
      File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 98, in forward
        x = layer(x, emb)
      File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 317, in forward
        return checkpoint(
      File "E:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\diffusionmodules\util.py", line 167, in checkpoint
        return func(*inputs)
      File "E:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 329, in _forward
        h = self.in_layers(x)
      File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\container.py", line 250, in forward
        input = module(input)
      File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\diffusionmodules\util.py", line 275, in forward
        return super().forward(x.float()).type(x.dtype)
      File "E:\stable-diffusion-webui-amdgpu\extensions-builtin\Lora\networks.py", line 614, in network_GroupNorm_forward
        return originals.GroupNorm_forward(self, input)
      File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\normalization.py", line 313, in forward
        return F.group_norm(input, self.num_groups, self.weight, self.bias, self.eps)
      File "E:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\functional.py", line 2965, in group_norm
        return torch.group_norm(
    RuntimeError: mixed dtype (CPU): expect parameter to have scalar type of Float

---

Additional information

This error started happening when I updated to driver version 25.5.1.
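
In case it helps with triage: the startup log also shows "Warning: caught exception 'CUDA unknown error ...', memory monitor disabled", so my guess is that ZLUDA fails to initialize on driver 25.5.1 and torch silently falls back to the CPU, which is the only backend with this mixed-dtype check. Below is a quick diagnostic sketch (not part of the webui, just something to run inside the webui venv) that should confirm whether that fallback is happening:

```python
import torch

print(torch.__version__)
# False here would confirm that the ZLUDA/CUDA device never came up
# and everything is running on the CPU
print(torch.cuda.is_available())
if torch.cuda.is_available():
    # with a working ZLUDA setup this should name the gfx1100 card from the log
    print(torch.cuda.get_device_name(0))
```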

SkadiAegis • May 10 '25 16:05

Also using driver 25.5.1, I get the same error and haven't been able to generate images on this driver version.

venv "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\Scripts\Python.exe"
WARNING: ZLUDA works best with SD.Next. Please consider migrating to SD.Next.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1-amd-36-g679c645e
Commit hash: 679c645ec84e40dd14d527dbeb03fab259087187
ROCm: agents=['gfx1100']
ROCm: version=6.2, using agent gfx1100
ZLUDA support: experimental
ZLUDA load: path='C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\.zluda' nightly=False
C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\cuda\__init__.py:936: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\pytorch\c10\cuda\CUDAFunctions.cpp:109.)
  r = torch._C._cuda_getDeviceCount() if nvml_count < 0 else nvml_count
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --use-zluda --listen
Warning: caught exception 'CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero.', memory monitor disabled
ONNX: version=1.21.1 provider=DmlExecutionProvider, available=['AzureExecutionProvider', 'CPUExecutionProvider']
[-] ADetailer initialized. version: 25.3.0, num models: 15
ControlNet preprocessor location: C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\extensions\sd-webui-controlnet\annotator\downloads
2025-05-11 07:11:14,047 - ControlNet - INFO - ControlNet v1.1.455
[sd-webui-freeu] Controlnet support: *enabled*
Loading weights [d91d35736d] from C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\models\Stable-diffusion\SDXL\juggernautXL_juggernautX.safetensors
Creating model from config: C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\repositories\generative-models\configs\inference\sd_xl_base.yaml
2025-05-11 07:11:14,634 - ControlNet - INFO - ControlNet UI callback registered.
C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\extensions\sd-webui-check-tensors\scripts\check-tensors.py:21: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  with gr.Row().style(equal_height=False):
Running on local URL:  http://0.0.0.0:7860
creating model quickly: OSError
Traceback (most recent call last):
  File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 409, in hf_raise_for_status
    response.raise_for_status()
  File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\requests\models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/None/resolve/main/config.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\utils\hub.py", line 342, in cached_file
    resolved_file = hf_hub_download(
  File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1008, in hf_hub_download
    return _hf_hub_download_to_cache_dir(
  File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1115, in _hf_hub_download_to_cache_dir
    _raise_on_head_call_error(head_call_error, force_download, local_files_only)
  File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1643, in _raise_on_head_call_error
    raise head_call_error
  File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1531, in _get_metadata_or_catch_error
    metadata = get_hf_file_metadata(
  File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1448, in get_hf_file_metadata
    r = _request_wrapper(
  File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 286, in _request_wrapper
    response = _request_wrapper(
  File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 310, in _request_wrapper
    hf_raise_for_status(response)
  File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 459, in hf_raise_for_status
    raise _format(RepositoryNotFoundError, message, response) from e
huggingface_hub.errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-682093e5-7876db357e73992e38e886fb;2ffd4184-5e24-437f-a813-d72ace68f605)

Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated. For more details, see https://huggingface.co/docs/huggingface_hub/authentication
Invalid username or password.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\filepath\\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\filepath\\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\filepath\\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\modules\initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\modules\shared_items.py", line 190, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 693, in get_sd_model
    load_model()
  File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 831, in load_model
    sd_model = instantiate_from_config(sd_config.model, state_dict)
  File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 775, in instantiate_from_config
    return constructor(**params)
  File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\models\diffusion.py", line 61, in __init__
    self.conditioner = instantiate_from_config(
  File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\util.py", line 175, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\encoders\modules.py", line 88, in __init__
    embedder = instantiate_from_config(embconfig)
  File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\util.py", line 175, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\encoders\modules.py", line 361, in __init__
    self.transformer = CLIPTextModel.from_pretrained(version)
  File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\modules\sd_disable_initialization.py", line 68, in CLIPTextModel_from_pretrained
    res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
  File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\modeling_utils.py", line 262, in _wrapper
    return func(*args, **kwargs)
  File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\modeling_utils.py", line 3540, in from_pretrained
    resolved_config_file = cached_file(
  File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\utils\hub.py", line 365, in cached_file
    raise EnvironmentError(
OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`

Failed to create model quickly; will retry using slow method.

To create a public link, set `share=True` in `launch()`.
IIB Database file has been successfully backed up to the backup folder.
Startup time: 25.2s (prepare environment: 29.7s, initialize shared: 0.8s, list SD models: 0.5s, load scripts: 2.0s, create ui: 0.7s, gradio launch: 4.2s).
Applying attention optimization: sdp... done.
Model loaded in 11.1s (create model: 4.9s, apply weights to model: 4.8s, apply half(): 0.2s, load textual inversion embeddings: 0.3s, calculate empty prompt: 0.8s).
  0%|                                                                                                                                                                                                                             | 0/43 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(sn6hczsu4h8kd8c)', <gradio.routes.Request object at 0x0000012570181510>, '', '', ['XL-TST Hamster Warrior Sunny Forrest'], 1, 1, 5.5, 1216, 832, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 43, 'DPM++ 2M', 'Karras', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_filter_method': 'Area', 'ad_mask_k': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_filter_method': 'Area', 'ad_mask_k': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_filter_method': 'Area', 'ad_mask_k': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 
'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 'DemoFusion', False, 128, 64, 4, 2, False, 10, 1, 1, 64, False, True, 3, 1, 1, True, 0.85, 0.6, 4, False, False, 512, 64, True, True, True, False, ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=-1, threshold_a=-1.0, threshold_b=-1.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, union_control_type=<ControlNetUnionControlType.UNKNOWN: 'Unknown'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=-1, threshold_a=-1.0, threshold_b=-1.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, union_control_type=<ControlNetUnionControlType.UNKNOWN: 'Unknown'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=-1, threshold_a=-1.0, threshold_b=-1.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, union_control_type=<ControlNetUnionControlType.UNKNOWN: 'Unknown'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), False, 0, 1, 0, 'Version 2', 1.2, 0.9, 0, 0.5, 0, 1, 1.4, 0.2, 0, 0.5, 0, 1, 1, 1, 0, 0.5, 0, 1, False, False, 
'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
    Traceback (most recent call last):
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 74, in f
        res = list(func(*args, **kwargs))
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 53, in f
        res = func(*args, **kwargs)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\modules\txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\modules\processing.py", line 849, in process_images
        res = process_images_inner(p)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 59, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\modules\processing.py", line 1083, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\modules\processing.py", line 1441, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\modules\sd_samplers_kdiffusion.py", line 233, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\modules\sd_samplers_common.py", line 272, in launch_sampling
        return func()
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\modules\sd_samplers_kdiffusion.py", line 233, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
        return func(*args, **kwargs)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\modules\sd_samplers_cfg_denoiser.py", line 268, in forward
        x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\modules\sd_models_xl.py", line 43, in apply_model
        return self.model(x, t, cond)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\modules\sd_hijack_utils.py", line 22, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\modules\sd_hijack_utils.py", line 34, in __call__
        return self.__sub_func(self.__orig_func, *args, **kwargs)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\modules\sd_hijack_unet.py", line 50, in apply_model
        result = orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\diffusionmodules\wrappers.py", line 28, in forward
        return self.diffusion_model(
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\modules\sd_unet.py", line 91, in UNetModel_forward
        return original_forward(self, x, timesteps, context, *args, **kwargs)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 993, in forward
        h = module(h, emb, context)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 98, in forward
        x = layer(x, emb)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 317, in forward
        return checkpoint(
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\diffusionmodules\util.py", line 167, in checkpoint
        return func(*inputs)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 329, in _forward
        h = self.in_layers(x)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\container.py", line 250, in forward
        input = module(input)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\diffusionmodules\util.py", line 275, in forward
        return super().forward(x.float()).type(x.dtype)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\extensions-builtin\Lora\networks.py", line 614, in network_GroupNorm_forward
        return originals.GroupNorm_forward(self, input)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\normalization.py", line 313, in forward
        return F.group_norm(input, self.num_groups, self.weight, self.bias, self.eps)
      File "C:\filepath\STABLE DIFFUSION\WEBUI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\functional.py", line 2965, in group_norm
        return torch.group_norm(
    RuntimeError: mixed dtype (CPU): expect parameter to have scalar type of Float

pw405 • May 11 '25 12:05

This issue should be fixed after ZLUDA v3.9.5.

lshqqytiger • Jul 25 '25 07:07