stable-diffusion-webui
[Bug]: Cannot switch checkpoints
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What happened?
I cannot change the checkpoint in the WebUI anymore since updating today. This is the error message I get:
```
LatentDiffusion: Running in eps-prediction mode
Traceback (most recent call last):
  File "E:\sd\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 284, in run_predict
    output = await app.blocks.process_api(
  File "E:\sd\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 982, in process_api
    result = await self.call_function(fn_index, inputs, iterator)
  File "E:\sd\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 824, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "E:\sd\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "E:\sd\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "E:\sd\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "E:\sd\stable-diffusion-webui\modules\ui.py", line 443, in update_token_counter
    tokens, token_count, max_length = max([model_hijack.tokenize(prompt) for prompt in prompts], key=lambda args: args[1])
  File "E:\sd\stable-diffusion-webui\modules\ui.py", line 443, in
```
Then, I cannot generate any images. I get this error message:
```
Error completing request
Arguments: ('ppp', '', 'None', 'None', 36, 2, False, False, 1, 6, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 0, 0, 0, 0.9, 5, '0.0001', False, 'None', '', 0.1, False, '
Combinations
\n\n Choose a number of terms from a list, in this case we choose two artists: \n{2$$artist1|artist2|artist3}
\n\n If $$ is not provided, then 1$$ is assumed.
\n\n If the chosen number of terms is greater than the available terms, then some terms will be duplicated, otherwise chosen terms will be unique. This is useful in the case of wildcards, e.g.\n
{2$$artist}
is equivalent to {2$$artist|artist}
\n\n A range can be provided:\n
{1-3$$artist1|artist2|artist3}
\n In this case, a random number of artists between 1 and 3 is chosen.
\n\n Wildcards can be used and the joiner can also be specified:\n
{{1-$$and$$adjective}}
\n\n Here, a random number between 1 and 3 words from adjective.txt will be chosen and joined together with the word 'and' instead of the default comma.\n\n
\n\n
Wildcards
\n \n\n\n If the groups wont drop down click here to fix the issue.\n\n
\n\n
WILDCARD_DIR: E:\sd\stable-diffusion-webui\extensions\sd-dynamic-prompts\wildcards
\n You can add more wildcards by creating a text file with one term per line and name is mywildcards.txt. Place it in E:\sd\stable-diffusion-webui\extensions\sd-dynamic-prompts\wildcards.
<folder>/mywildcards
will then become available.\n', 'None', 30, 4, 0, 0, 4, 0.4, True, 32, 1.0, 2.0, 'a painting in', 'style', 'picture frame, portrait photo', None) {}
Traceback (most recent call last):
  File "E:\sd\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 284, in run_predict
    output = await app.blocks.process_api(
  File "E:\sd\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 982, in process_api
    result = await self.call_function(fn_index, inputs, iterator)
  File "E:\sd\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 824, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "E:\sd\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "E:\sd\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "E:\sd\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "E:\sd\stable-diffusion-webui\modules\ui.py", line 443, in update_token_counter
    tokens, token_count, max_length = max([model_hijack.tokenize(prompt) for prompt in prompts], key=lambda args: args[1])
  File "E:\sd\stable-diffusion-webui\modules\ui.py", line 443, in
```
Steps to reproduce the problem
- Change the checkpoint in the dropdown at the top of the WebUI.
I had the inpainting model loaded last, and now I cannot switch to any other checkpoint. After that, I cannot generate images either.
What should have happened?
It should have switched to the selected checkpoint normally.
Commit where the problem happens
98947d173e3f1667eba29c904f681047dea9de90
What platforms do you use to access the UI?
Windows
What browsers do you use to access the UI?
Microsoft Edge
Command Line Arguments
```
--precision full --medvram --no-half --ckpt-dir "C:\SD\models" --listen --enable-insecure-extension-access --xformers --vae-path "C:\SD\moremodels\v1-5-pruned-emaonly.vae.pt" --api --cors-allow-origins=*
```
Additional information, context and logs
It seems to be related to this PR: #4514
trying to reproduce the problem, what is your checkpoint cache setting set to ?
@R-N do you think we can just remove the restore_base_vae() as you mentioned ?
> trying to reproduce the problem, what is your checkpoint cache setting set to ?
>
> @R-N do you think we can just remove the restore_base_vae() as you mentioned ?
I had the cache set to 2. I set it to 0 and that fixed the problem
I think adding back the `and hasattr(model, "sd_checkpoint_info")` check would fix the problem: if the model has no `sd_checkpoint_info` yet, do not try to restore.
Honestly, I am not sure if the restore is still needed at all now that we have changed the caching a bit.
Line 168:

```python
if cache_enabled and hasattr(model, "sd_checkpoint_info"):
    sd_vae.restore_base_vae(model)
```

Could you test that?
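The effect of that guard can be sketched in isolation. This is a minimal stand-alone model of the logic, not the real webui code: `FakeVae`, `maybe_restore`, `FreshModel`, and `LoadedModel` are all hypothetical stand-ins.

```python
# Sketch of the proposed guard (stand-in stubs, not the real webui modules).
# Idea: only restore the base VAE if the model has already been through a
# full load, i.e. it already carries an sd_checkpoint_info attribute.

class FakeVae:
    """Stand-in for modules.sd_vae."""
    def __init__(self):
        self.restored = False

    def restore_base_vae(self, model):
        self.restored = True

def maybe_restore(model, sd_vae, cache_enabled):
    # A freshly constructed model has no sd_checkpoint_info, so restoring
    # would crash; the hasattr guard simply skips the restore in that case.
    if cache_enabled and hasattr(model, "sd_checkpoint_info"):
        sd_vae.restore_base_vae(model)

class FreshModel:        # model before any checkpoint has been loaded
    pass

class LoadedModel:       # model after a checkpoint load
    sd_checkpoint_info = object()

vae = FakeVae()
maybe_restore(FreshModel(), vae, cache_enabled=True)
print(vae.restored)      # False: guard skipped the restore

maybe_restore(LoadedModel(), vae, cache_enabled=True)
print(vae.restored)      # True: restore runs once the attribute exists
```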
> @R-N do you think we can just remove the restore_base_vae() as you mentioned ?
Yeah, that's what I suggested in #4514. If you have a checkpoint with a config file to trigger it, please try it.
But that `restore_base_vae` call dates from when the caching was done at the start of `load_model_weights`. It was called so that the cache wouldn't pick up the separately loaded VAE. Now that the caching is done right after the checkpoint weights are loaded, that `restore_base_vae` call can simply be removed.
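The ordering argument can be illustrated with a toy model of the loader. All names here are hypothetical (`load_model_weights_new`, `checkpoint_cache` as a plain dict of weight dicts); this is a sketch of the reasoning, not the actual implementation.

```python
# Sketch of why the restore became unnecessary (all names hypothetical).
# Old order: the cache snapshot was taken at the START of load_model_weights,
# while the previously loaded external VAE was still baked into the model,
# so restore_base_vae() had to undo it first.
# New order: the snapshot is taken right after the checkpoint's own weights
# are loaded, BEFORE any external VAE is applied, so there is nothing left
# to restore.

checkpoint_cache = {}

def load_model_weights_new(name, checkpoint_weights, external_vae=None):
    model = dict(checkpoint_weights)        # load the checkpoint weights
    checkpoint_cache[name] = dict(model)    # cache BEFORE applying the VAE
    if external_vae is not None:
        model.update(external_vae)          # apply the separate VAE afterwards
    return model

m = load_model_weights_new("sd-v1-5", {"vae": "builtin"},
                           external_vae={"vae": "anime"})
print(m["vae"])                             # anime: model uses the external VAE
print(checkpoint_cache["sd-v1-5"]["vae"])   # builtin: the cache stays clean
```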
I tried both versions and also noticed the strange behavior of the VAE dropdown, but I saw you created a pull request for this. I updated my fix to just remove the VAE restore as a quick fix; later your PR can be merged for a better VAE fix.
> i tried both versions and also noticed the strange behavior of the vae drop down
Would you mind describing the strange behavior?
@R-N are you on Discord?
For testing I often use checkpoint X/Y plotting with multiple checkpoints and the same seed, and run them multiple times. The results are not consistent (e.g. one time the VAE seems to be applied and another time not).
E.g. checkpoints: 'animevae,v1-5-pruned-emaonly', and I only have animevae.vae.
I'm also having similar issues, using cached models and swapping between checkpoints. It tends to happen when the SD_VAE option is set to Auto, and specifically when swapping from a model with an external .vae.pt file (e.g. animefull-final-pruned) to a checkpoint that doesn't have one (e.g. sd-v1-5):
```
Traceback (most recent call last):
  File "G:\Visions of Chaos\MachineLearning\Text To Image\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 284, in run_predict
    output = await app.blocks.process_api(
  File "G:\Visions of Chaos\MachineLearning\Text To Image\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 982, in process_api
    result = await self.call_function(fn_index, inputs, iterator)
  File "G:\Visions of Chaos\MachineLearning\Text To Image\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 824, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "G:\Visions of Chaos\MachineLearning\Text To Image\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "G:\Visions of Chaos\MachineLearning\Text To Image\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "G:\Visions of Chaos\MachineLearning\Text To Image\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "G:\Visions of Chaos\MachineLearning\Text To Image\stable-diffusion-webui\modules\ui.py", line 1662, in <lambda>
    fn=lambda value, k=k: run_settings_single(value, key=k),
  File "G:\Visions of Chaos\MachineLearning\Text To Image\stable-diffusion-webui\modules\ui.py", line 1504, in run_settings_single
    opts.data_labels[key].onchange()
  File "G:\Visions of Chaos\MachineLearning\Text To Image\stable-diffusion-webui\webui.py", line 42, in f
    res = func(*args, **kwargs)
  File "G:\Visions of Chaos\MachineLearning\Text To Image\stable-diffusion-webui\webui.py", line 84, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: modules.sd_models.reload_model_weights()))
  File "G:\Visions of Chaos\MachineLearning\Text To Image\stable-diffusion-webui\modules\sd_models.py", line 285, in reload_model_weights
    load_model(checkpoint_info)
  File "G:\Visions of Chaos\MachineLearning\Text To Image\stable-diffusion-webui\modules\sd_models.py", line 254, in load_model
    load_model_weights(sd_model, checkpoint_info)
  File "G:\Visions of Chaos\MachineLearning\Text To Image\stable-diffusion-webui\modules\sd_models.py", line 169, in load_model_weights
    sd_vae.restore_base_vae(model)
  File "G:\Visions of Chaos\MachineLearning\Text To Image\stable-diffusion-webui\modules\sd_vae.py", line 54, in restore_base_vae
    if base_vae is not None and checkpoint_info == model.sd_checkpoint_info:
  File "G:\Visions of Chaos\MachineLearning\Text To Image\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1207, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'LatentDiffusion' object has no attribute 'sd_checkpoint_info'
```
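The last two frames show why the proposed `hasattr` guard works here: `torch.nn.Module` defines a custom `__getattr__` that raises `AttributeError` for any attribute not found among its parameters, buffers, or submodules, and `hasattr` turns that exception into `False`. A minimal stand-alone sketch of the failure mode, mimicking that behavior without importing torch (`FakeLatentDiffusion` is hypothetical):

```python
# Minimal reproduction of the failure mode, without torch.
# torch.nn.Module defines __getattr__ so that a missing attribute raises
# AttributeError, exactly as in module.py line 1207 of the traceback above.

class FakeLatentDiffusion:
    def __getattr__(self, name):
        raise AttributeError(
            "'{}' object has no attribute '{}'".format(type(self).__name__, name)
        )

model = FakeLatentDiffusion()

try:
    model.sd_checkpoint_info     # direct access crashes, as in the log
except AttributeError as e:
    print(e)  # 'FakeLatentDiffusion' object has no attribute 'sd_checkpoint_info'

print(hasattr(model, "sd_checkpoint_info"))  # False: hasattr absorbs the error
```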
I had a different error when SD_VAE = None: `AttributeError: 'NoneType' object has no attribute 'sd_checkpoint_info'`
Hopefully this new pull request by R-N will fix things :)