stable-diffusion-webui
[Bug]: Unable to select a checkpoint model when starting from a clean or existing installation.
Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
What happened?
Can't choose a checkpoint to run Stable Diffusion. This only happened after I switched out my old power supply for an upcoming 4090 I ordered, so it may be hardware related, or it could just be coincidental.
Steps to reproduce the problem
- Attempt to select a checkpoint...
- ...fail.
What should have happened?
The checkpoint should have loaded automatically so I could start my next image generation.
What browsers do you use to access the UI?
Microsoft Edge
Sysinfo
Console logs
venv "M:\z\venv\Scripts\Python.exe"
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.9.0
Commit hash: <none>
Launching Web UI with arguments:
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Loading weights [ff3a2961a8] from M:\z\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
loading stable diffusion model: SafetensorError
Traceback (most recent call last):
File "C:\Users\Ande\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "C:\Users\Ande\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\Ande\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "M:\z\modules\initialize.py", line 149, in load_model
shared.sd_model # noqa: B018
File "M:\z\modules\shared_items.py", line 175, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "M:\z\modules\sd_models.py", line 620, in get_sd_model
load_model()
File "M:\z\modules\sd_models.py", line 705, in load_model
state_dict = get_checkpoint_state_dict(checkpoint_info, timer)
File "M:\z\modules\sd_models.py", line 330, in get_checkpoint_state_dict
res = read_state_dict(checkpoint_info.filename)
File "M:\z\modules\sd_models.py", line 304, in read_state_dict
pl_sd = safetensors.torch.load_file(checkpoint_file, device=device)
File "M:\z\venv\lib\site-packages\safetensors\torch.py", line 308, in load_file
with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer
Stable diffusion model failed to load
Applying attention optimization: Doggettx... done.
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 7.8s (prepare environment: 1.7s, import torch: 2.8s, import gradio: 0.7s, setup paths: 0.8s, initialize shared: 0.3s, other imports: 0.4s, load scripts: 0.4s, create ui: 0.4s, gradio launch: 0.2s).
Loading weights [ff3a2961a8] from M:\z\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
loading stable diffusion model: SafetensorError
Traceback (most recent call last):
File "C:\Users\Ande\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "C:\Users\Ande\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "M:\z\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "M:\z\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "M:\z\modules\ui.py", line 1154, in <lambda>
update_image_cfg_scale_visibility = lambda: gr.update(visible=shared.sd_model and shared.sd_model.cond_stage_key == "edit")
File "M:\z\modules\shared_items.py", line 175, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "M:\z\modules\sd_models.py", line 620, in get_sd_model
load_model()
File "M:\z\modules\sd_models.py", line 705, in load_model
state_dict = get_checkpoint_state_dict(checkpoint_info, timer)
File "M:\z\modules\sd_models.py", line 330, in get_checkpoint_state_dict
res = read_state_dict(checkpoint_info.filename)
File "M:\z\modules\sd_models.py", line 304, in read_state_dict
pl_sd = safetensors.torch.load_file(checkpoint_file, device=device)
File "M:\z\venv\lib\site-packages\safetensors\torch.py", line 308, in load_file
with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer
Stable diffusion model failed to load
changing setting sd_model_checkpoint to v1-5-pruned-emaonly.safetensors [ff3a2961a8]: AttributeError
Traceback (most recent call last):
File "M:\z\modules\options.py", line 165, in set
option.onchange()
File "M:\z\modules\call_queue.py", line 13, in f
res = func(*args, **kwargs)
File "M:\z\modules\initialize_util.py", line 181, in <lambda>
shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
File "M:\z\modules\sd_models.py", line 860, in reload_model_weights
sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
File "M:\z\modules\sd_models.py", line 793, in reuse_model_from_already_loaded
send_model_to_cpu(sd_model)
File "M:\z\modules\sd_models.py", line 662, in send_model_to_cpu
if m.lowvram:
AttributeError: 'NoneType' object has no attribute 'lowvram'
Loading weights [ff3a2961a8] from M:\z\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
changing setting sd_model_checkpoint to v1-5-pruned-emaonly.safetensors [ff3a2961a8]: AttributeError
loading stable diffusion model: SafetensorError
Traceback (most recent call last):
File "M:\z\modules\options.py", line 165, in set
option.onchange()
File "M:\z\modules\call_queue.py", line 13, in f
res = func(*args, **kwargs)
Traceback (most recent call last):
File "C:\Users\Ande\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "M:\z\modules\initialize_util.py", line 181, in <lambda>
shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
File "C:\Users\Ande\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "M:\z\modules\sd_models.py", line 860, in reload_model_weights
sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
File "M:\z\modules\sd_models.py", line 793, in reuse_model_from_already_loaded
send_model_to_cpu(sd_model)
File "M:\z\modules\sd_models.py", line 662, in send_model_to_cpu
if m.lowvram:
File "M:\z\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "M:\z\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "M:\z\modules\ui.py", line 1154, in <lambda>
update_image_cfg_scale_visibility = lambda: gr.update(visible=shared.sd_model and shared.sd_model.cond_stage_key == "edit")
AttributeError: 'NoneType' object has no attribute 'lowvram'
File "M:\z\modules\shared_items.py", line 175, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "M:\z\modules\sd_models.py", line 620, in get_sd_model
load_model()
File "M:\z\modules\sd_models.py", line 705, in load_model
state_dict = get_checkpoint_state_dict(checkpoint_info, timer)
File "M:\z\modules\sd_models.py", line 330, in get_checkpoint_state_dict
res = read_state_dict(checkpoint_info.filename)
File "M:\z\modules\sd_models.py", line 304, in read_state_dict
pl_sd = safetensors.torch.load_file(checkpoint_file, device=device)
File "M:\z\venv\lib\site-packages\safetensors\torch.py", line 308, in load_file
with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer
Stable diffusion model failed to load
Additional information
No response
Loading weights [ff3a2961a8] from M:\z\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
The hash (ff3a2961a8) for v1-5-pruned-emaonly.safetensors should be 6ce0161689. The file may be corrupt.
Deleting it will cause it to be re-downloaded the next time webui is started (unless other models are present), or it can be manually downloaded here.
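For anyone hitting the same SafetensorError, a quick way to confirm a checkpoint is corrupt is to hash the file yourself and compare the result against the expected value. A minimal sketch, assuming (as webui's labeling suggests) that the bracketed value such as [6ce0161689] is the first 10 hex characters of the file's full SHA-256, and using the path from the logs above:

```python
# Sketch: verify a checkpoint against the short hash webui prints.
# Assumption: the bracketed hash is the first 10 hex digits of the file's SHA-256.
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so multi-GB checkpoints don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

checkpoint = r"M:\z\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors"
print(sha256_of_file(checkpoint)[:10])  # a good v1-5-pruned-emaonly should print 6ce0161689
```

If the printed value does not match, re-download the file.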
Well, this is odd. I did what you said, and it fixed the clean install. The existing install, on the other hand, was still giving me trouble until I copied the 1.5 model over to it, after which it started working. I have over a dozen other checkpoints to use, and none of them could be selected, so does this mean the default 1.5 model is required for the program to run, even if you're not actively using it?
No, it can safely be deleted.
If it can be deleted, then why did replacing it with the uncorrupted version fix things? Does this mean that if some model gets corrupted, the entire thing will grind to a halt, or is it something else? I'm trying to figure this out in case the same issue reappears in the future.
- https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15600
So it's not the default model that's the culprit, but whatever model webui points to first at program load...interesting.
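For future readers: the secondary AttributeError in the logs ('NoneType' object has no attribute 'lowvram') happens because the initial checkpoint failed to load, so there is no model object, and the handler for the checkpoint-change setting then dereferences None when it tries to move the "current" model to CPU. The linked PR presumably guards against that case; a self-contained sketch of the guard pattern (not the actual webui code; FakeModel and the print calls are stand-ins for illustration):

```python
# Illustration of the failure mode and a guard-style fix (not the webui source).

class FakeModel:
    """Stand-in for a loaded Stable Diffusion model."""
    lowvram = False

    def to(self, device: str) -> None:
        print(f"moved model to {device}")

def send_model_to_cpu(m) -> None:
    # The crash in the logs: m is None because the checkpoint failed to load,
    # so m.lowvram raised AttributeError. Guarding on None avoids the crash
    # and lets the UI continue so another checkpoint can be selected.
    if m is None:
        return
    if m.lowvram:
        print("lowvram path: offload submodules individually")
    else:
        m.to("cpu")

send_model_to_cpu(None)         # previously the crash site; now a no-op
send_model_to_cpu(FakeModel())  # normal path unchanged
```

In other words, any corrupt checkpoint that happens to be loaded first at startup can trigger the failure, which matches the behaviour described above.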