stable-diffusion-webui
[SD2.0 Bug]: AttributeError: 'NoneType' object has no attribute 'items' and yaml.scanner.ScannerError: mapping values are not allowed here
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What happened?
Since SD 2.0 came out I have tried to get it up and running on several forks, done fresh installs, and waited on several updates to the main AUTOMATIC1111 branch, but I'm still seeing these errors.
The webui launches fine as long as there's an SD 1.x ckpt in the models folder, but when switching to the 768 2.0 model, it throws this error:
venv "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\Scripts\Python.exe" Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] Commit hash: bb11bee22ab02aa2fb5b96baa9be8103fff19e6a Installing requirements for Web UI Launching Web UI with arguments: --xformers LatentInpaintDiffusion: Running in eps-prediction mode DiffusionWrapper has 859.54 M params. Loading weights [3e16efc8] from A:\Desktop\00 AI Images\stable-diffusion-webui\models\Stable-diffusion\00sd-v1-5-inpainting.ckpt Applying xformers cross attention optimization. Model loaded. Loaded a total of 0 textual inversion embeddings. Embeddings: Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
Loading config from: A:\Desktop\00 AI Images\stable-diffusion-webui\models\Stable-diffusion\768-v-ema.yaml
LatentDiffusion: Running in v-prediction mode
DiffusionWrapper has 865.91 M params.
Error verifying pickled file from C:\Users\Patrick\.cache\huggingface\hub\models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K\snapshots\58a1e03a7acfacbe6b95ebc24ae0394eda6a14fc\open_clip_pytorch_model.bin:
Traceback (most recent call last):
File "A:\Desktop\00 AI Images\stable-diffusion-webui\modules\safe.py", line 83, in check_pt
with zipfile.ZipFile(filename) as z:
File "C:\Users\Patrick\AppData\Local\Programs\Python\Python310\lib\zipfile.py", line 1267, in init
self._RealGetContents()
File "C:\Users\Patrick\AppData\Local\Programs\Python\Python310\lib\zipfile.py", line 1334, in _RealGetContents
raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "A:\Desktop\00 AI Images\stable-diffusion-webui\modules\safe.py", line 131, in load_with_extra
check_pt(filename, extra_handler)
File "A:\Desktop\00 AI Images\stable-diffusion-webui\modules\safe.py", line 98, in check_pt
unpickler.load()
_pickle.UnpicklingError: persistent IDs in protocol 0 must be ASCII strings
-----> !!!! The file is most likely corrupted !!!! <-----
You can skip this check with --disable-safe-unpickle commandline argument, but that is not going to help you.
Traceback (most recent call last):
File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 284, in run_predict
output = await app.blocks.process_api(
File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 982, in process_api
result = await self.call_function(fn_index, inputs, iterator)
File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 824, in call_function
prediction = await anyio.to_thread.run_sync(
File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "A:\Desktop\00 AI Images\stable-diffusion-webui\modules\ui.py", line 1664, in
And I know that technically (according to the wiki) the 512 model isn't supported yet, but I have its yaml file in there, and when I try to load it up, I get this error:
To create a public link, set share=True in launch().
Loading config from: A:\Desktop\00 AI Images\stable-diffusion-webui\models\Stable-diffusion\512-base-ema.yaml
Traceback (most recent call last):
File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 284, in run_predict
output = await app.blocks.process_api(
File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 982, in process_api
result = await self.call_function(fn_index, inputs, iterator)
File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 824, in call_function
prediction = await anyio.to_thread.run_sync(
File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "A:\Desktop\00 AI Images\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "A:\Desktop\00 AI Images\stable-diffusion-webui\modules\ui.py", line 1664, in
Steps to reproduce the problem
- launch webui-user.bat
- go to http://127.0.0.1:7860
- In the "Stable Diffusion Checkpoint" dropdown box, select the 768-v-ema option
What should have happened?
The model should have switched successfully and not errored out.
Commit where the problem happens
bb11bee22ab02aa2fb5b96baa9be8103fff19e6a
What platforms do you use to access UI ?
Windows
What browsers do you use to access the UI ?
Google Chrome
Command Line Arguments
--xformers, --share, and some others, but I've also tried it without any arguments and with various combinations.
Additional information, context and logs
For what it's worth, I'm running an RTX 2060 Super with 8 GB of VRAM, the latest NVIDIA drivers, Windows 11 (OS Build 22623.891), an Intel(R) Core(TM) i9-9900K CPU @ 3.60 GHz, and 64.0 GB of RAM.
same
A couple of possible reasons why; check each one:
- The .yaml must have the same name as the model, i.e. 768-v-ema.ckpt needs 768-v-ema.yaml, and both need to be in your "...\stable-diffusion-webui\models\Stable-diffusion" folder.
- Make sure the .yaml you downloaded isn't scuffed. Go here, click v2-inference-v.yaml, then right-click the "Raw" button and "Save link as" 768-v-ema.yaml. DO NOT just right-click v2-inference-v.yaml and "Save as" directly from the link; that results in a corrupted download (you save the GitHub HTML page instead of the YAML itself). Alternatively, download the Stability-AI/stablediffusion master branch from their main page and pull the .yaml you need from the downloaded files.
- Make sure the model itself isn't scuffed; its SHA256 should be bfcaf0755797b0c30eb00a3787e8b423eb1f5decd8de76c4d824ac2dd27e139f. Check yours; if it doesn't match, your model is broken and you need to redownload it (a quick way to check is sketched right after this list).
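If you don't have a hash tool handy, here's a small Python sketch for that last check. The path is only an example and assumes the model sits in the default models folder; point it at your own copy:

```python
# Compute the SHA256 of a checkpoint and compare it against the expected value.
import hashlib

path = r"models\Stable-diffusion\768-v-ema.ckpt"  # example path; adjust to your install
expected = "bfcaf0755797b0c30eb00a3787e8b423eb1f5decd8de76c4d824ac2dd27e139f"

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        h.update(chunk)

print(h.hexdigest())
print("OK" if h.hexdigest() == expected else "MISMATCH - redownload the model")
```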
this worked for me, thank you
I did everything you wrote. Didn't work for me :(
No dice. I tried several of the ways to get the yaml file that you mentioned and checked the hash of my model file, but I'm still getting the error for the 768 model. I haven't attempted anything new for the 512 model.
Got the same issue.
Error verifying pickled file from C:\Users\Patrick\.cache\huggingface\hub\models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K\snapshots\58a1e03a7acfacbe6b95ebc24ae0394eda6a14fc\open_clip_pytorch_model.bin:
I deleted the C:\Users\<username>\.cache\huggingface\hub\models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K folder, and running webui-user.bat re-downloaded it. This worked for me.
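Same fix as a small Python sketch, in case the cache folder is awkward to find by hand. It assumes the cache lives under your user profile, as in the log above; the folder is re-created when webui-user.bat re-downloads the weights on the next launch:

```python
# Remove the cached (and likely corrupted) CLIP weights so they are
# re-downloaded on the next webui launch. Assumes the default
# huggingface cache location under the user profile.
import shutil
from pathlib import Path

cache_dir = (Path.home() / ".cache" / "huggingface" / "hub"
             / "models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K")

if cache_dir.exists():
    shutil.rmtree(cache_dir)
    print(f"Deleted {cache_dir} - relaunch webui-user.bat to re-download the weights")
else:
    print(f"Nothing to delete at {cache_dir}")
```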
Holy cow!! That was it!! That at least fixed the 768 model issue.
Aaaand pulling the yaml file from the zip of the stable diffusion GitHub code fixed the 512 model issue. That's a wrap, boys. Good work!