stable-diffusion-webui
[Bug]: impossible to change models
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
- A lot of errors on startup.
- Impossible to change models in WebUI
### Steps to reproduce the problem
- Try to start the WebUI
### What should have happened?
On startup I get a long list of errors (the log is below). The interface then starts, but if I try to switch to any SDXL model, I get another long list of errors and see in the console that it is trying to download a strange file, ip_pytorch_model.bin, weighing 10 GB.
### Sysinfo
### What browsers do you use to access the UI?
Google Chrome
### Console logs
```
creating model quickly: OSError
Traceback (most recent call last):
File "C:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_errors.py", line 261, in hf_raise_for_status
response.raise_for_status()
File "C:\stable-diffusion-webui\venv\lib\site-packages\requests\models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/None/resolve/main/config.json
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\stable-diffusion-webui\venv\lib\site-packages\transformers\utils\hub.py", line 429, in cached_file
resolved_file = hf_hub_download(
File "C:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "C:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1195, in hf_hub_download
metadata = get_hf_file_metadata(
File "C:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "C:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1541, in get_hf_file_metadata
hf_raise_for_status(r)
File "C:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_errors.py", line 293, in hf_raise_for_status
raise RepositoryNotFoundError(message, response) from e
huggingface_hub.utils._errors.RepositoryNotFoundError: 404 Client Error. (Request ID: Root=1-6502842e-038d35a170ef5a983f0dfb8f;375cda1b-e897-4693-aef7-667164338587)
Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\itech\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "C:\Users\itech\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\itech\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "C:\stable-diffusion-webui\modules\initialize.py", line 147, in load_model
shared.sd_model # noqa: B018
File "C:\stable-diffusion-webui\modules\shared_items.py", line 110, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "C:\stable-diffusion-webui\modules\sd_models.py", line 499, in get_sd_model
load_model()
File "C:\stable-diffusion-webui\modules\sd_models.py", line 602, in load_model
sd_model = instantiate_from_config(sd_config.model)
File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1650, in __init__
super().__init__(concat_keys, *args, **kwargs)
File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1515, in __init__
super().__init__(*args, **kwargs)
File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 563, in __init__
self.instantiate_cond_stage(cond_stage_config)
File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 630, in instantiate_cond_stage
model = instantiate_from_config(config)
File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 104, in __init__
self.transformer = CLIPTextModel.from_pretrained(version)
File "C:\stable-diffusion-webui\modules\sd_disable_initialization.py", line 68, in CLIPTextModel_from_pretrained
res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
File "C:\stable-diffusion-webui\venv\lib\site-packages\transformers\modeling_utils.py", line 2377, in from_pretrained
resolved_config_file = cached_file(
File "C:\stable-diffusion-webui\venv\lib\site-packages\transformers\utils\hub.py", line 450, in cached_file
raise EnvironmentError(
OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`
Failed to create model quickly; will retry using slow method.
Running on local URL: http://127.0.0.1:7860
```
### Additional information
_No response_
I had the same problem on Windows today as well, after cloning the repo. I was testing whether I could get it to work with Python 3.11, CUDA 12.1, and PyTorch 2.2... and that was the result. Make sure you create the venv with Python 3.10. If you have a newer version, or multiple versions of Python installed, you can get around the problem by installing Python 3.10.6 and running this at the command prompt (you may have to delete your venv first):

```
C:\Users\(username)\AppData\Local\Programs\Python\Python310\python.exe -m venv C:\stable-diffusion-webui\venv
```

Otherwise, try deleting your venv and creating a new one, or deleting the packages listed in the errors.
I have Python 3.10.6. I have also already tried deleting /venv/ in the WebUI folder. Nothing helps.
Do you have a way to reproduce the issue?
You can try moving the models completely outside the stable-diffusion-webui folder; this error could be webui's "falling back to previously used model" logic failing to load.
I solved the problem this way: I have a copy of the WebUI on another drive with auto-update disabled at startup (a working copy in case WebUI becomes inoperable after the next update). I copied the /venv/ folder from there and the above errors went away.
I have the same issue. I updated my extensions today and it stopped working. It can't load any checkpoints and always tries to download model.safetensors despite there already being one there. The command-line arg to stop it from downloading a model doesn't work. I can get the UI up in debug mode.
So I figured it out... I had to let it download the new CLIP model. It ate my config.json, but it's back.
And now SDXL also downloads a 10 GB ip_pytorch_model.bin to an unknown folder. If I let it finish it will probably load, but that's a lot to download through the unstable PyTorch downloader with no resume.
I have no idea what to do :( It's happening again.
UPD: I just discovered that the problem only occurs when the Clip Interrogator extension is installed. After removing it from the extensions folder, the problem went away.
I have no idea how the Clip Interrogator extension can cause errors and an attempt to download a 10 GB file when switching the model to SDXL in the WebUI. Even phrased like that it sounds extremely strange :) But the fact is: removing the extension fixed the problem.
The 10 GB model is a CLIP model. The errors that cause it come from repositories/stable-diffusion. I will try disabling the CLIP extension and see if the error goes away.
I found the bug!
https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/modules/sd_disable_initialization.py
```python
def CLIPTextModel_from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs):
    # Passing None as the repo id is what produces the
    # https://huggingface.co/None/resolve/main/config.json lookup seen in the log.
    res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
    res.name_or_path = pretrained_model_name_or_path
    return res
```
It is supposed to disable loading the CLIP model, but it ends up trying to load it anyway, so there's no point in passing None.
Fix by:
```python
def CLIPTextModel_from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs):
    # Forward the real name/path instead of None so transformers can resolve the config;
    # an empty state_dict is still passed because webui loads the real weights from the
    # checkpoint file afterwards.
    res = self.CLIPTextModel_from_pretrained(pretrained_model_name_or_path, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
    res.name_or_path = pretrained_model_name_or_path
    return res
```
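For anyone wondering where the `https://huggingface.co/None/resolve/main/config.json` 404 in the log comes from: when the config is passed only as a string, the transformers build from the traceback resolves the repo id before anything else, and `str(None)` becomes the literal repo name "None". A minimal sketch reproducing it outside webui, assuming that same transformers version and network access (other versions may fail differently):

```python
# Hedged reproduction of the failure mode, independent of webui.
# Passing None as the model id while supplying the config only by name makes
# from_pretrained try to fetch https://huggingface.co/None/resolve/main/config.json,
# which fails with the same OSError shown in the console log above.
from transformers import CLIPTextModel

try:
    CLIPTextModel.from_pretrained(
        None,                                    # what the unpatched wrapper passes
        config="openai/clip-vit-large-patch14",  # config referenced by name, as in the SD 1.x path
        state_dict={},                           # empty: webui loads real weights from the checkpoint later
    )
except OSError as err:
    print(err)  # "None is not a local folder and is not a valid model identifier ..."
```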
Thanks for your hard work. It is very useful.
This worked, you're a genius @Ph0rk0z
> UPD: I just discovered that the problem only occurs when the Clip Interrogator extension is installed. After removing it from the extensions folder, the problem went away.
> I have no idea how the Clip Interrogator extension can cause errors and an attempt to download a 10 GB file when switching the model to SDXL in the WebUI. Even phrased like that it sounds extremely strange :) But the fact is: removing the extension fixed the problem.
Hi, are you still there? Could you expand on how to remove the extension? I haven't installed any extensions, and there is nothing in the stable-diffusion-webui-master\stable-diffusion-webui-master\extensions folder.
> I found the bug!
Fantastic work. I wiped my venv today and upon rebuilding got the same error messages. Substituting `pretrained_model_name_or_path` in place of None worked perfectly!
See my cross-post here:
TheLastBen/fast-stable-diffusion#2937 (comment)
I think it will solve the issue as it did for me.
Enjoy
> I found the bug!
I have no idea how, but this fixed it, thank you so much. Btw, does someone know where the 10 GB file goes? Do I have it on my machine? Can I delete it?
Asking as a person with zero knowledge of all of this who just recently set up SD with ZLUDA for AMD. And when I say zero knowledge, I mean I even had to search for how to change a directory in CMD lol, at that level. I barely understood this bug and fixed it with the code change above.
For this stable-diffusion-webui package, the places with the GBs of files tend to be the `models\` folder inside the webui install and the Hugging Face cache under `C:\Users\<username>\.cache\huggingface`.
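If you want to check whether the interrupted 10 GB download actually landed on disk, here is a minimal sketch that lists files over roughly 1 GB in those two locations; the webui path is an assumed default install location, so adjust both paths to your setup:

```python
# Hedged sketch: print files larger than ~1 GB under the usual big-file locations,
# so a leftover ip_pytorch_model.bin (or any other partial download) is easy to spot.
from pathlib import Path

candidates = [
    Path(r"C:\stable-diffusion-webui\models"),   # assumed webui install path: checkpoints, VAEs, etc.
    Path.home() / ".cache" / "huggingface",      # default Hugging Face download cache
]

for root in candidates:
    if not root.exists():
        continue
    for f in root.rglob("*"):
        if f.is_file() and f.stat().st_size > 1_000_000_000:
            print(f"{f.stat().st_size / 1e9:5.1f} GB  {f}")
```

Anything under the Hugging Face cache is safe to delete; it will simply be re-downloaded if something needs it again.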
As an aside, I've moved away from this "AUTOMATIC1111/stable-diffusion-webui" project as it is old and out-of-date. Intel has done a marvelous job reinventing this in their OpenVINO Toolkit:
https://github.com/openvinotoolkit
https://github.com/openvinotoolkit/openvino_notebooks/tree/latest/notebooks/text-to-image-genai
https://github.com/openvinotoolkit/openvino_notebooks/tree/latest/notebooks/stable-diffusion-v3
**EDIT**: "stable-diffusion-v3" seems to want 32 GB and won't render on my doggy 16 GB NVIDIA-less laptop.
"text-to-image-genai" works on everything I've got. A little slow on that laptop but it works. Also, the gradio web screen has about as many features as "stable-diffusion-v3" (a ton less than "AUTOMATIC1111/stable-diffusion-webui") so I'd start with "text-to-image-genai"
The "stable-diffusion-v3" notebook is the best replacement for "AUTOMATIC1111/stable-diffusion-webui". Use the default "tensorart/stable-diffusion-3.5-medium-turbo" model as the "turbo" is much faster at rendering. Also, I had to run "optimum-cli" command string from a command prompt so I could see progress and rerun as needed till it completed its conversion from huggingface to OpenVINO format. Once converted, the "optimum-cli" step is skipped.
Enjoy the OpenVINO notebooks. Lots of great stuff there that runs for me on Python 3.12 with up-to-date packages. Really, current versions for almost everything (mxnet is my only outlier -- not required here).