
[Bug]: Can't load SDXL model, crashes after update

Open fifskank opened this issue 1 year ago • 4 comments

Checklist

  • [ ] The issue exists after disabling all extensions
  • [x] The issue exists on a clean installation of webui
  • [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • [x] The issue exists in the current version of the webui
  • [ ] The issue has not been reported before recently
  • [ ] The issue has been reported before but has not been fixed yet

What happened?

Can't load SDXL models; the webui crashes every time I try to load an SDXL model.

Steps to reproduce the problem

Double-click on webUI.bat with the following arguments:

set COMMANDLINE_ARGS= --skip-torch-cuda-test --skip-version-check --no-half-vae --upcast-sampling --opt-split-attention --disable-nan-check --use-directml
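For reference, these arguments are normally set in `webui-user.bat` rather than passed on the command line. A minimal sketch of such a file, assuming the standard webui launcher layout (filename and variable names follow the stock template; the argument list is the one reported above):

```shell
@echo off
rem Hypothetical webui-user.bat matching the reported launch arguments
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--skip-torch-cuda-test --skip-version-check --no-half-vae --upcast-sampling --opt-split-attention --disable-nan-check --use-directml

call webui.bat
```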

What should have happened?

It loaded smoothly before when I used this option: FP8 weight (Use FP8 to store Linear/Conv layers' weight. Requires pytorch>=2.1.0.)

What browsers do you use to access the UI?

Mozilla Firefox

Sysinfo

sysinfo-2025-01-27-00-12.json

Console logs

venv "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1-amd-21-g3cf53018
Commit hash: 3cf530186f76d0005e4c791cca9a0d8f4aa013c4
WARNING: you should not skip torch test unless you want CPU to work.
D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
  warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --skip-torch-cuda-test --skip-version-check --no-half-vae --upcast-sampling --opt-split-attention --disable-nan-check --use-directml
ONNX: version=1.20.1 provider=DmlExecutionProvider, available=['DmlExecutionProvider', 'CPUExecutionProvider']
Loading weights [6ce0161689] from D:\SDXL_auto\stable-diffusion-webui-amdgpu\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Creating model from config: D:\SDXL_auto\stable-diffusion-webui-amdgpu\configs\v1-inference.yaml
Startup time: 9.2s (prepare environment: 12.4s, initialize shared: 1.4s, load scripts: 0.5s, create ui: 0.4s, gradio launch: 0.6s).
creating model quickly: OSError
Traceback (most recent call last):
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 406, in hf_raise_for_status
    response.raise_for_status()
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\requests\models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/None/resolve/main/config.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\utils\hub.py", line 403, in cached_file
    resolved_file = hf_hub_download(
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 860, in hf_hub_download
    return _hf_hub_download_to_cache_dir(
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 967, in _hf_hub_download_to_cache_dir
    _raise_on_head_call_error(head_call_error, force_download, local_files_only)
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1482, in _raise_on_head_call_error
    raise head_call_error
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1374, in _get_metadata_or_catch_error
    metadata = get_hf_file_metadata(
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1294, in get_hf_file_metadata
    r = _request_wrapper(
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 278, in _request_wrapper
    response = _request_wrapper(
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 302, in _request_wrapper
    hf_raise_for_status(response)
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 454, in hf_raise_for_status
    raise _format(RepositoryNotFoundError, message, response) from e
huggingface_hub.errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-6796d018-41134195540bc08a09341e67;7346f6b0-b5ca-47b0-8840-5b3f7c07251d)

Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\tANK_\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users\tANK_\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\tANK_\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\modules\initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\modules\shared_items.py", line 190, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 693, in get_sd_model
    load_model()
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 831, in load_model
    sd_model = instantiate_from_config(sd_config.model, state_dict)
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 775, in instantiate_from_config
    return constructor(**params)
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 563, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 630, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 104, in __init__
    self.transformer = CLIPTextModel.from_pretrained(version)
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\modules\sd_disable_initialization.py", line 68, in CLIPTextModel_from_pretrained
    res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\modeling_utils.py", line 3464, in from_pretrained
    resolved_config_file = cached_file(
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\utils\hub.py", line 426, in cached_file
    raise EnvironmentError(
OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`

Failed to create model quickly; will retry using slow method.
Applying attention optimization: Doggettx... done.
Model loaded in 16.0s (load weights from disk: 0.8s, create model: 1.8s, apply weights to model: 12.9s, apply half(): 0.2s, calculate empty prompt: 0.1s).
Reusing loaded model v1-5-pruned-emaonly.safetensors [6ce0161689] to load animagineXLV31_v31.safetensors [e3c47aedb0]
Loading weights [e3c47aedb0] from D:\SDXL_auto\stable-diffusion-webui-amdgpu\models\Stable-diffusion\animagineXLV31_v31.safetensors
Creating model from config: D:\SDXL_auto\stable-diffusion-webui-amdgpu\repositories\generative-models\configs\inference\sd_xl_base.yaml
creating model quickly: OSError
Traceback (most recent call last):
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 406, in hf_raise_for_status
    response.raise_for_status()
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\requests\models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/None/resolve/main/config.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\utils\hub.py", line 403, in cached_file
    resolved_file = hf_hub_download(
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 860, in hf_hub_download
    return _hf_hub_download_to_cache_dir(
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 967, in _hf_hub_download_to_cache_dir
    _raise_on_head_call_error(head_call_error, force_download, local_files_only)
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1482, in _raise_on_head_call_error
    raise head_call_error
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1374, in _get_metadata_or_catch_error
    metadata = get_hf_file_metadata(
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1294, in get_hf_file_metadata
    r = _request_wrapper(
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 278, in _request_wrapper
    response = _request_wrapper(
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 302, in _request_wrapper
    hf_raise_for_status(response)
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 454, in hf_raise_for_status
    raise _format(RepositoryNotFoundError, message, response) from e
huggingface_hub.errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-6796d032-341dc927051541af38d3b615;89686bc0-0799-4a68-9c8a-a2e28b38f291)

Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\tANK_\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users\tANK_\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\modules\ui_settings.py", line 316, in <lambda>
    fn=lambda value, k=k: self.run_settings_single(value, key=k),
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\modules\ui_settings.py", line 95, in run_settings_single
    if value is None or not opts.set(key, value):
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\modules\options.py", line 165, in set
    option.onchange()
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 14, in f
    res = func(*args, **kwargs)
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\modules\initialize_util.py", line 181, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 992, in reload_model_weights
    load_model(checkpoint_info, already_loaded_state_dict=state_dict)
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 831, in load_model
    sd_model = instantiate_from_config(sd_config.model, state_dict)
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 775, in instantiate_from_config
    return constructor(**params)
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\models\diffusion.py", line 61, in __init__
    self.conditioner = instantiate_from_config(
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\util.py", line 175, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\encoders\modules.py", line 88, in __init__
    embedder = instantiate_from_config(embconfig)
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\util.py", line 175, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\encoders\modules.py", line 361, in __init__
    self.transformer = CLIPTextModel.from_pretrained(version)
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\modules\sd_disable_initialization.py", line 68, in CLIPTextModel_from_pretrained
    res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\modeling_utils.py", line 3464, in from_pretrained
    resolved_config_file = cached_file(
  File "D:\SDXL_auto\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\utils\hub.py", line 426, in cached_file
    raise EnvironmentError(
OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`

Failed to create model quickly; will retry using slow method.
D:\SDXL_auto\stable-diffusion-webui-amdgpu\modules\safe.py:156: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  return unsafe_torch_load(filename, *args, **kwargs)
[F126 18:16:50.000000000 dml_util.cc:118] Invalid or unsupported data type Float8_e4m3fn.
Press any key to continue . . .

Additional information

No response

fifskank avatar Jan 27 '25 00:01 fifskank

DirectML never has had FP8 data type. Are you sure you used FP8 on DirectML before?

lshqqytiger avatar Jan 27 '25 03:01 lshqqytiger

DirectML never has had FP8 data type. Are you sure you used FP8 on DirectML before?

Yes sir, I've been working this way for the past 3 years now! Loading an SDXL model used to crash, but after I went through the settings and found the FP8 option for SDXL models, boom, it worked fine. I started using this build you provide, then upgraded to 1.6.0 RC and then to 1.8.0 RC, and it worked fine, so I started working with SDXL. But this weekend some updates happened and it broke.

fifskank avatar Jan 27 '25 14:01 fifskank

@lshqqytiger we got an issue report of what I believe is the same issue: they're using stable-diffusion-webui-amdgpu but misreported it to a1111

  • https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16821

From my brief testing, this seems to be caused by an update to the transformers package. In a1111 we pin the version to transformers==4.30.2, but I've noticed that you seem to have removed the version pin for dependency reasons in commit https://github.com/lshqqytiger/stable-diffusion-webui-amdgpu/commit/d8b7380b18d044d2ee38695c58bae3a786689cf3

I'm guessing from your comment https://github.com/lshqqytiger/stable-diffusion-webui-amdgpu/issues/576#issuecomment-2614803560 that you were not able to reproduce the issue. I believe this is because, since you did not specify a specific version for transformers, a new install will use the newest version of the transformers package, but your local instance is most likely on an old version of transformers, so you were not able to reproduce the issue (https://pypi.org/project/transformers/#history). I also did a quick test using transformers==4.48.2 on a1111, and a similar issue was reproduced.

Note: this is not limited to SDXL; I was testing with SD1.5.

I believe this is the actual cause of the issue.
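Until the dependency is re-pinned upstream, one possible workaround (untested here; the version number is a1111's pin, and the paths assume the default Windows venv layout of this fork) is to downgrade transformers inside the webui's venv:

```shell
rem Run from the stable-diffusion-webui-amdgpu directory (paths assumed)
call venv\Scripts\activate.bat
pip install transformers==4.30.2
```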

w-e-w avatar Jan 31 '25 06:01 w-e-w

So I got that transformers==4.25.1 is the correct version, BUT I'm missing the correct version for diffusers. Can you help out with that? It seems requirements.txt is asking for diffusers==0.31.0.

fifskank avatar Feb 03 '25 17:02 fifskank