
[SD2.0 Bug]: AttributeError: 'NoneType' object has no attribute 'items' and yaml.scanner.ScannerError: mapping values are not allowed here

[Open] patrickmac110 opened this issue 2 years ago • 4 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits

What happened?

I have tried to get SD 2.0 up and running on several forks, tried fresh installs, and waited through several updates to the main AUTOMATIC1111 branch since SD 2.0 came out, and I'm still seeing these errors.

The webui launches fine as long as there's an SD 1.x ckpt in the models folder, but when I switch to the 768 2.0 model, it throws this error:

venv "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\Scripts\Python.exe" Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] Commit hash: bb11bee22ab02aa2fb5b96baa9be8103fff19e6a Installing requirements for Web UI Launching Web UI with arguments: --xformers LatentInpaintDiffusion: Running in eps-prediction mode DiffusionWrapper has 859.54 M params. Loading weights [3e16efc8] from A:\Desktop\00 AI Images\stable-diffusion-webui\models\Stable-diffusion\00sd-v1-5-inpainting.ckpt Applying xformers cross attention optimization. Model loaded. Loaded a total of 0 textual inversion embeddings. Embeddings: Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Loading config from: A:\Desktop\00 AI Images\stable-diffusion-webui\models\Stable-diffusion\768-v-ema.yaml
LatentDiffusion: Running in v-prediction mode
DiffusionWrapper has 865.91 M params.
Error verifying pickled file from C:\Users\Patrick\.cache\huggingface\hub\models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K\snapshots\58a1e03a7acfacbe6b95ebc24ae0394eda6a14fc\open_clip_pytorch_model.bin:
Traceback (most recent call last):
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\modules\safe.py", line 83, in check_pt
    with zipfile.ZipFile(filename) as z:
  File "C:\Users\Patrick\AppData\Local\Programs\Python\Python310\lib\zipfile.py", line 1267, in __init__
    self._RealGetContents()
  File "C:\Users\Patrick\AppData\Local\Programs\Python\Python310\lib\zipfile.py", line 1334, in _RealGetContents
    raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\modules\safe.py", line 131, in load_with_extra
    check_pt(filename, extra_handler)
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\modules\safe.py", line 98, in check_pt
    unpickler.load()
_pickle.UnpicklingError: persistent IDs in protocol 0 must be ASCII strings

-----> !!!! The file is most likely corrupted !!!! <----- You can skip this check with --disable-safe-unpickle commandline argument, but that is not going to help you.

Traceback (most recent call last):
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 284, in run_predict
    output = await app.blocks.process_api(
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 982, in process_api
    result = await self.call_function(fn_index, inputs, iterator)
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 824, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\modules\ui.py", line 1664, in <lambda>
    fn=lambda value, k=k: run_settings_single(value, key=k),
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\modules\ui.py", line 1505, in run_settings_single
    if not opts.set(key, value):
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\modules\shared.py", line 477, in set
    self.data_labels[key].onchange()
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\webui.py", line 45, in f
    res = func(*args, **kwargs)
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\webui.py", line 87, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: modules.sd_models.reload_model_weights()))
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\modules\sd_models.py", line 292, in reload_model_weights
    load_model(checkpoint_info)
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\modules\sd_models.py", line 260, in load_model
    sd_model = instantiate_from_config(sd_config.model)
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 563, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 630, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 147, in __init__
    model, _, _ = open_clip.create_model_and_transforms(arch, device=torch.device('cpu'), pretrained=version)
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\open_clip\factory.py", line 201, in create_model_and_transforms
    model = create_model(
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\open_clip\factory.py", line 165, in create_model
    load_checkpoint(model, checkpoint_path)
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\open_clip\factory.py", line 91, in load_checkpoint
    state_dict = load_state_dict(checkpoint_path)
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\open_clip\factory.py", line 85, in load_state_dict
    if next(iter(state_dict.items()))[0].startswith('module'):
AttributeError: 'NoneType' object has no attribute 'items'

And I know that technically (according to the wiki) the 512 model isn't supported yet, but I have its yaml file in there, and when I try to load it, I get this error:

To create a public link, set share=True in launch().
Loading config from: A:\Desktop\00 AI Images\stable-diffusion-webui\models\Stable-diffusion\512-base-ema.yaml
Traceback (most recent call last):
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 284, in run_predict
    output = await app.blocks.process_api(
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 982, in process_api
    result = await self.call_function(fn_index, inputs, iterator)
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 824, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\modules\ui.py", line 1664, in <lambda>
    fn=lambda value, k=k: run_settings_single(value, key=k),
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\modules\ui.py", line 1505, in run_settings_single
    if not opts.set(key, value):
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\modules\shared.py", line 477, in set
    self.data_labels[key].onchange()
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\webui.py", line 45, in f
    res = func(*args, **kwargs)
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\webui.py", line 87, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: modules.sd_models.reload_model_weights()))
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\modules\sd_models.py", line 292, in reload_model_weights
    load_model(checkpoint_info)
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\modules\sd_models.py", line 243, in load_model
    sd_config = OmegaConf.load(checkpoint_info.config)
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\omegaconf\omegaconf.py", line 188, in load
    obj = yaml.load(f, Loader=get_yaml_loader())
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\yaml\__init__.py", line 81, in load
    return loader.get_single_data()
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\yaml\constructor.py", line 49, in get_single_data
    node = self.get_single_node()
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\yaml\composer.py", line 36, in get_single_node
    document = self.compose_document()
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\yaml\composer.py", line 58, in compose_document
    self.get_event()
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\yaml\parser.py", line 118, in get_event
    self.current_event = self.state()
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\yaml\parser.py", line 193, in parse_document_end
    token = self.peek_token()
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\yaml\scanner.py", line 129, in peek_token
    self.fetch_more_tokens()
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\yaml\scanner.py", line 223, in fetch_more_tokens
    return self.fetch_value()
  File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\yaml\scanner.py", line 577, in fetch_value
    raise ScannerError(None, None,
yaml.scanner.ScannerError: mapping values are not allowed here
  in "A:\Desktop\00 AI Images\stable-diffusion-webui\models\Stable-diffusion\512-base-ema.yaml", line 28, column 66

Steps to reproduce the problem

  1. launch webui-user.bat
  2. go to http://127.0.0.1:7860
  3. In the "Stable Diffusion Checkpoint" dropdown box, select the 768-v-ema option

What should have happened?

The model should have switched successfully and not errored out.

Commit where the problem happens

bb11bee22ab02aa2fb5b96baa9be8103fff19e6a

What platforms do you use to access the UI?

Windows

What browsers do you use to access the UI?

Google Chrome

Command Line Arguments

--xformers, --share, and some others, but I've also tried it with no arguments and with various combinations

Additional information, context and logs

For what it's worth, I'm running an RTX 2060 Super with 8 GB of VRAM and the latest NVIDIA drivers on Windows 11 (OS Build 22623.891), with an Intel Core i9-9900K CPU @ 3.60 GHz and 64 GB of RAM.

patrickmac110 avatar Nov 28 '22 03:11 patrickmac110

same

1NFERR avatar Nov 28 '22 05:11 1NFERR

There are a couple of possible causes; check each one.

  1. The .yaml must have the same name as the model, e.g. 768-v-ema.ckpt needs 768-v-ema.yaml, and both need to be in your "...\stable-diffusion-webui\models\Stable-diffusion" folder.
  2. Make sure the .yaml you downloaded isn't scuffed. Go to the Stability-AI/stablediffusion repo on GitHub, open v2-inference-v.yaml, then right-click the "Raw" button and "Save link as" 768-v-ema.yaml. DO NOT just right-click v2-inference-v.yaml in the file listing and "save as" directly from the link; that gives you an HTML page instead of the yaml and results in a corrupted download. Alternatively, just download the Stability-AI/stablediffusion master branch from their main page and pull the .yaml you need from the downloaded files.
  3. Make sure the model itself isn't scuffed; its SHA256 should be bfcaf0755797b0c30eb00a3787e8b423eb1f5decd8de76c4d824ac2dd27e139f. Check yours. If it doesn't match, your model is broken and you need to redownload it. (A quick way to check points 2 and 3 is sketched after this list.)
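
If it helps, here's a minimal sketch of how I'd check points 2 and 3 from Python inside the webui's venv (so PyYAML is available). This is just a checking script, not part of the webui; the folder path and filenames are the ones from this thread, so adjust them to your setup.

    import hashlib
    from pathlib import Path

    import yaml  # PyYAML, already installed in the webui's venv

    # Paths and expected hash taken from this thread; adjust to your setup.
    MODELS_DIR = Path(r"A:\Desktop\00 AI Images\stable-diffusion-webui\models\Stable-diffusion")
    EXPECTED_SHA256 = "bfcaf0755797b0c30eb00a3787e8b423eb1f5decd8de76c4d824ac2dd27e139f"

    ckpt = MODELS_DIR / "768-v-ema.ckpt"
    cfg = ckpt.with_suffix(".yaml")  # point 1: same basename as the model

    # Point 2: a bad "save as" download is usually an HTML page, which won't parse as yaml.
    try:
        with open(cfg, encoding="utf-8") as f:
            yaml.safe_load(f)
        print(f"{cfg.name}: parses cleanly")
    except FileNotFoundError:
        print(f"{cfg.name}: missing (it must sit next to the .ckpt)")
    except yaml.YAMLError as err:
        print(f"{cfg.name}: corrupted, re-download the raw file ({err})")

    # Point 3: hash the checkpoint in chunks, since it's several GB.
    digest = hashlib.sha256()
    with open(ckpt, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    print("model hash OK" if digest.hexdigest() == EXPECTED_SHA256
          else "model hash mismatch: re-download the model")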

skdursh avatar Nov 28 '22 20:11 skdursh

This worked for me, thank you.

psdwizzard avatar Nov 28 '22 22:11 psdwizzard

I did everything you wrote. Didn't work for me :(

1NFERR avatar Nov 29 '22 04:11 1NFERR

> There are a couple of possible causes; check each one.
>
>   1. The .yaml must have the same name as the model, e.g. 768-v-ema.ckpt needs 768-v-ema.yaml, and both need to be in your "...\stable-diffusion-webui\models\Stable-diffusion" folder.
>   2. Make sure the .yaml you downloaded isn't scuffed. Go to the Stability-AI/stablediffusion repo on GitHub, open v2-inference-v.yaml, then right-click the "Raw" button and "Save link as" 768-v-ema.yaml. DO NOT just right-click v2-inference-v.yaml in the file listing and "save as" directly from the link; that gives you an HTML page instead of the yaml and results in a corrupted download. Alternatively, just download the Stability-AI/stablediffusion master branch from their main page and pull the .yaml you need from the downloaded files.
>   3. Make sure the model itself isn't scuffed; its SHA256 should be bfcaf0755797b0c30eb00a3787e8b423eb1f5decd8de76c4d824ac2dd27e139f. Check yours. If it doesn't match, your model is broken and you need to redownload it.

No dice. I tried several of the ways you mentioned to get the yaml file and checked the hash of my model file, and I'm still getting the error for the 768 model. I haven't attempted anything new for the 512 model.

patrickmac110 avatar Nov 29 '22 22:11 patrickmac110

Got the same issue.

Error verifying pickled file from C:\Users\Patrick\.cache\huggingface\hub\models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K\snapshots\58a1e03a7acfacbe6b95ebc24ae0394eda6a14fc\open_clip_pytorch_model.bin:

I deleted the C:\Users\<username>\.cache\huggingface\hub\models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K folder; running webui-user.bat re-downloads it. This worked for me.
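
For anyone who'd rather script that cleanup, here's a minimal sketch. It assumes the default huggingface_hub cache location under your user profile (C:\Users\<username>\.cache\huggingface\hub on Windows); it's not part of the webui.

    import shutil
    from pathlib import Path

    # Default huggingface_hub cache location for the open_clip checkpoint
    # named in the error above.
    cache_dir = (Path.home() / ".cache" / "huggingface" / "hub"
                 / "models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K")

    if cache_dir.exists():
        shutil.rmtree(cache_dir)  # the next webui-user.bat launch re-downloads it
        print(f"Removed {cache_dir}")
    else:
        print(f"Nothing to remove at {cache_dir}")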

ianlamth avatar Dec 01 '22 06:12 ianlamth

Holy cow!! That was it!! That at least fixed the 768 model issue.

patrickmac110 avatar Dec 01 '22 13:12 patrickmac110

Aaaand pulling the yaml file from the zip of the stable diffusion GitHub code fixed the 512 model issue. That's a wrap, boys. Good work!

patrickmac110 avatar Dec 01 '22 15:12 patrickmac110