
[Bug]: Unusable on Mac after updating to 1.6.0: TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.

Open · owspace opened this issue 1 year ago • 58 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits

What happened?

After launching the webui, images cannot be generated.

Steps to reproduce the problem

1. Launch the webui
2. Enter a prompt
3. Start generating an image

What should have happened?

Images should be generated normally.

Sysinfo

{
    "date": "Thu Aug 31 22:36:37 2023",
    "timestamp": "22:36:52",
    "uptime": "Thu Aug 31 22:18:16 2023",
    "version": {
        "app": "stable-diffusion-webui",
        "updated": "2023-08-31",
        "hash": "5ef669de",
        "url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/tree/master"
    },
    "torch": "2.0.1 autocast half",
    "gpu": {},
    "state": {
        "started": "Thu Aug 31 22:36:52 2023",
        "step": "0 / 0",
        "jobs": "0 / 0",
        "flags": "",
        "job": "",
        "text-info": ""
    },
    "memory": {
        "ram": {
            "free": 29.98,
            "used": 2.02,
            "total": 32
        }
    },
    "optimizations": [ "none" ],
    "libs": {
        "xformers": "",
        "diffusers": "",
        "transformers": "4.30.2"
    },
    "repos": {
        "Stable Diffusion": "[cf1d67a] 2023-03-25",
        "Stable Diffusion XL": "[45c443b] 2023-07-26",
        "CodeFormer": "[c5b4593] 2022-09-09",
        "BLIP": "[48211a1] 2022-06-07",
        "k_diffusion": "[ab527a9] 2023-08-12"
    },
    "device": {
        "active": "mps",
        "dtype": "torch.float16",
        "vae": "torch.float32",
        "unet": "torch.float16"
    },
}

What browsers do you use to access the UI ?

Google Chrome

Console logs

Loading weights [0b9e46a0b0] from /Users/alyears/stable-diffusion-webui/models/Stable-diffusion/人物/墨幽人造人_v1040.safetensors
Traceback (most recent call last):
  File "/Users/alyears/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "/Users/alyears/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "/Users/alyears/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/Users/alyears/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/Users/alyears/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/Users/alyears/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/Users/alyears/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "/Users/alyears/stable-diffusion-webui/modules/ui_extra_networks.py", line 392, in pages_html
    return refresh()
  File "/Users/alyears/stable-diffusion-webui/modules/ui_extra_networks.py", line 398, in refresh
    pg.refresh()
  File "/Users/alyears/stable-diffusion-webui/modules/ui_extra_networks_textual_inversion.py", line 13, in refresh
    sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True)
  File "/Users/alyears/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 255, in load_textual_inversion_embeddings
    self.expected_shape = self.get_expected_shape()
  File "/Users/alyears/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 154, in get_expected_shape
    vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
AttributeError: 'NoneType' object has no attribute 'cond_stage_model'
Creating model from config: /Users/alyears/stable-diffusion-webui/configs/v1-inference.yaml
loading stable diffusion model: TypeError
Traceback (most recent call last):
  File "/opt/homebrew/Cellar/python@3.10/3.10.12_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/opt/homebrew/Cellar/python@3.10/3.10.12_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/Users/alyears/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/Users/alyears/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "/Users/alyears/stable-diffusion-webui/modules/ui.py", line 1298, in <lambda>
    update_image_cfg_scale_visibility = lambda: gr.update(visible=shared.sd_model and shared.sd_model.cond_stage_key == "edit")
  File "/Users/alyears/stable-diffusion-webui/modules/shared_items.py", line 110, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/Users/alyears/stable-diffusion-webui/modules/sd_models.py", line 499, in get_sd_model
    load_model()
  File "/Users/alyears/stable-diffusion-webui/modules/sd_models.py", line 626, in load_model
    load_model_weights(sd_model, checkpoint_info, state_dict, timer)
  File "/Users/alyears/stable-diffusion-webui/modules/sd_models.py", line 353, in load_model_weights
    model.load_state_dict(state_dict, strict=False)
  File "/Users/alyears/stable-diffusion-webui/modules/sd_disable_initialization.py", line 223, in <lambda>
    module_load_state_dict = self.replace(torch.nn.Module, 'load_state_dict', lambda *args, **kwargs: load_state_dict(module_load_state_dict, *args, **kwargs))
  File "/Users/alyears/stable-diffusion-webui/modules/sd_disable_initialization.py", line 221, in load_state_dict
    original(module, state_dict, strict=strict)
  File "/Users/alyears/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2027, in load_state_dict
    load(self, state_dict)
  File "/Users/alyears/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  File "/Users/alyears/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  File "/Users/alyears/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  [Previous line repeated 3 more times]
  File "/Users/alyears/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2009, in load
    module._load_from_state_dict(
  File "/Users/alyears/stable-diffusion-webui/modules/sd_disable_initialization.py", line 226, in <lambda>
    conv2d_load_from_state_dict = self.replace(torch.nn.Conv2d, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(conv2d_load_from_state_dict, *args, **kwargs))
  File "/Users/alyears/stable-diffusion-webui/modules/sd_disable_initialization.py", line 191, in load_from_state_dict
    module._parameters[name] = torch.nn.parameter.Parameter(torch.zeros_like(param, device=device, dtype=dtype), requires_grad=param.requires_grad)
  File "/Users/alyears/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_meta_registrations.py", line 1780, in zeros_like
    return aten.empty_like.default(
  File "/Users/alyears/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_ops.py", line 287, in __call__
    return self._op(*args, **kwargs or {})
  File "/Users/alyears/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_refs/__init__.py", line 4254, in empty_like
    return torch.empty_strided(
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.


Stable diffusion model failed to load

Additional information

No response

owspace avatar Aug 31 '23 14:08 owspace

Possibly related: https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12526, but maybe not the actual cause, as that one explicitly uses float32 for MPS.
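For anyone trying to pin this down: the bottom frame of the traceback is torch.zeros_like(param, device=device, dtype=dtype) in modules/sd_disable_initialization.py, which fails as soon as the target device is mps and a float64 dtype is requested. Below is a minimal sketch outside the webui (assuming an Apple Silicon Mac with the MPS backend available) that should reproduce the same TypeError and shows the float32 coercion MPS needs; it only illustrates the constraint and is not the webui's code path or fix:

```python
import torch

# Needs an Apple Silicon Mac where the MPS backend is available.
assert torch.backends.mps.is_available()

# A float64 tensor standing in for whatever float64 value reaches the loader.
param = torch.zeros(4, dtype=torch.float64)

try:
    # Same shape of call as the failing line: dtype defaults to param's float64.
    torch.zeros_like(param, device="mps")
except TypeError as err:
    print(err)  # Cannot convert a MPS Tensor to float64 dtype ...

# MPS supports float32, so coercing the dtype first succeeds.
ok = torch.zeros_like(param, device="mps", dtype=torch.float32)
print(ok.dtype, ok.device)  # torch.float32 mps:0
```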

w-e-w avatar Aug 31 '23 16:08 w-e-w

Same here :(

WatashiLoveliver avatar Sep 01 '23 02:09 WatashiLoveliver

Yes, same error (MacBook Pro, M2 Max, 96 GB memory): "TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead."

It was working up until this commit.

Console log

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################

################################################################
Running on nick user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
python venv already activate or run without venv: /Users/nick/PycharmProjects/stable-diffusion-webui/venv
################################################################

################################################################
Launching launch.py...
################################################################
Python 3.10.13 (main, Aug 24 2023, 22:36:46) [Clang 14.0.3 (clang-1403.0.22.14.1)]
Version: v1.6.0
Commit hash: 5ef669de080814067961f28357256e8fe27544f4
Fetching updates for Stable Diffusion XL...
Checking out commit for Stable Diffusion XL with hash: 45c443b316737a4ab6e40413d7794a7f5657c19f...
Previous HEAD position was 5c10dee Merge branch 'main' of https://github.com/Stability-AI/generative-models into main
HEAD is now at 45c443b Fix license-files setting for project (#71)
Fetching updates for K-diffusion...
Checking out commit for K-diffusion with hash: ab527a9a6d347f364e3d185ba6d714e22d80cb3c...
Previous HEAD position was 51c9778 Add PyTorch 1.12.1 MPS workaround for DPM fast
HEAD is now at ab527a9 Release 0.0.16
Installing requirements
Launching Web UI with arguments: --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
2023-09-01 08:41:25.298 Python[5358:226581] apply_selection_policy_once: avoid use of removable GPUs (via org.python.python:GPUSelectionPolicy->avoidRemovable)
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Loading weights [7ed60a2f58] from /Users/nick/PycharmProjects/stable-diffusion-webui/models/Stable-diffusion/juggernaut_aftermath.safetensors
Creating model from config: /Users/nick/PycharmProjects/stable-diffusion-webui/models/Stable-diffusion/juggernaut_aftermath.yaml
Running on local URL: http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 26.1s (prepare environment: 14.8s, import torch: 3.0s, import gradio: 1.5s, setup paths: 1.0s, initialize shared: 1.0s, other imports: 3.8s, load scripts: 0.3s, initialize extra networks: 0.2s, create ui: 0.2s, gradio launch: 0.3s).
loading stable diffusion model: TypeError
Traceback (most recent call last):
  File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/initialize.py", line 147, in load_model
    shared.sd_model  # noqa: B018
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/shared_items.py", line 110, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_models.py", line 499, in get_sd_model
    load_model()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_models.py", line 626, in load_model
    load_model_weights(sd_model, checkpoint_info, state_dict, timer)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_models.py", line 353, in load_model_weights
    model.load_state_dict(state_dict, strict=False)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_disable_initialization.py", line 223, in <lambda>
    module_load_state_dict = self.replace(torch.nn.Module, 'load_state_dict', lambda *args, **kwargs: load_state_dict(module_load_state_dict, *args, **kwargs))
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_disable_initialization.py", line 221, in load_state_dict
    original(module, state_dict, strict=strict)
  File "/opt/homebrew/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2027, in load_state_dict
    load(self, state_dict)
  File "/opt/homebrew/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  File "/opt/homebrew/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  File "/opt/homebrew/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  [Previous line repeated 3 more times]
  File "/opt/homebrew/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2009, in load
    module._load_from_state_dict(
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_disable_initialization.py", line 226, in <lambda>
    conv2d_load_from_state_dict = self.replace(torch.nn.Conv2d, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(conv2d_load_from_state_dict, *args, **kwargs))
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_disable_initialization.py", line 191, in load_from_state_dict
    module._parameters[name] = torch.nn.parameter.Parameter(torch.zeros_like(param, device=device, dtype=dtype), requires_grad=param.requires_grad)
  File "/opt/homebrew/lib/python3.10/site-packages/torch/_meta_registrations.py", line 1780, in zeros_like
    return aten.empty_like.default(
  File "/opt/homebrew/lib/python3.10/site-packages/torch/_ops.py", line 287, in __call__
    return self._op(*args, **kwargs or {})
  File "/opt/homebrew/lib/python3.10/site-packages/torch/_refs/__init__.py", line 4254, in empty_like
    return torch.empty_strided(
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.

nicklansley avatar Sep 01 '23 08:09 nicklansley

~I faced the same issue; the following worked for me:~
- ~Delete `webui-macos-env.sh`~
- ~Remove `export PYTORCH_MPS_HIGH_WATERMARK_RATIO="0.0"` from `webui-user.sh` if it exists~
- ~Add this env: `export COMMANDLINE_ARGS="--skip-torch-cuda-test --no-half --use-cpu all"`~

Update: this was a temporary workaround that forces CPU use; you should try @ericwagner101's resolution instead: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/12907#issuecomment-1704071970

ngtongsheng avatar Sep 01 '23 11:09 ngtongsheng

FWIW, on v1.6.0-78-gd39440bf, `--skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate` works fine on my M2 Mac and uses the GPU with both SD and SDXL models.

akx avatar Sep 01 '23 12:09 akx

Thanks @akx, but my console output shows those startup settings are executed (14 lines below "Launching launch.py..." in my comment above).

nicklansley avatar Sep 01 '23 13:09 nicklansley

@ngtongsheng Yes that works BUT now it's generating using the actual CPUs and not Apple's Metal GPU!

In this screenshot of Activity Monitor you can see 'Python' with 338% CPU and 0% GPU, and the CPU and GPU graphs confirming this, during the creation of one 512x512 image that took 3.5 seconds per iteration (not bad for a CPU but overall, very slow!).

[Screenshot: Activity Monitor, 2023-09-01 at 14:52:54]
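As a side note, a quick webui-independent way to check from the same environment whether the MPS device is usable at all (plain PyTorch, nothing webui-specific; run it with ./venv/bin/python):

```python
import torch

print("MPS built:    ", torch.backends.mps.is_built())
print("MPS available:", torch.backends.mps.is_available())

# Allocating a tensor on MPS either works (the Metal GPU is usable) or raises immediately.
x = torch.ones(1, device="mps")
print("tensor device:", x.device)  # expected: mps:0
```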

nicklansley avatar Sep 01 '23 13:09 nicklansley

@nicklansley Well, for one, you're evidently not (fully) using a virtualenv to run the project: /opt/homebrew/lib/python3.10/site-packages/torch/nn/modules/module.py is not a virtualenv path. A virtualenv is really the only correct way to run it (unless you're in a container or some other isolation, which you're not), so chances are your global Python environment (/opt/homebrew/lib/python3.10/site-packages) has wrong versions of some packages.

I would honestly recommend uninstalling all extra packages from that global Python environment, and then ensure you're using a virtualenv going forward.

akx avatar Sep 01 '23 14:09 akx

Thanks @akx, but I promise you I am using a virtualenv! From the console log above: `python venv already activate or run without venv: /Users/nick/PycharmProjects/stable-diffusion-webui/venv`

HOWEVER... could it be that it is mixing up the virtualenv and the global site-packages?

OK a way forward - thank you.

nicklansley avatar Sep 01 '23 14:09 nicklansley

You are using a virtualenv, but you're clearly using a torch from outside that virtualenv. Without a venv activated, run `python3.10 -m pip list` and then `python3.10 -m pip uninstall -y ...` on the packages that shouldn't be installed globally (practically everything but pip, wheel, and setuptools).
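A quick way to confirm which torch each interpreter actually resolves (a generic check, not part of the webui) is to ask torch for its own location, once with the venv interpreter (./venv/bin/python) and once with the global python3.10:

```python
import sys

import torch

print("interpreter:   ", sys.executable)
print("torch version: ", torch.__version__)
# For the webui this should point inside .../stable-diffusion-webui/venv/...;
# a path under /opt/homebrew/lib/python3.10/site-packages means the global torch is in use.
print("torch location:", torch.__file__)
```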

akx avatar Sep 01 '23 15:09 akx

I wiped everything but the model directory, performed a git clone, then let webui.sh create the virtual env and install the packages.

Still the same error, but because I know all the models worked before this commit, I will roll back a couple of weeks and step forward again.

My new run of webui.sh is below, with references to /venv/ packages clearly showing in the execution, except for access to /opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py, which is where Homebrew installed the 'master' copy of python@3.10.

Perhaps core packages such as threading.py are not copied into the virtual environment when a full path including the major.minor.patch (3.10.13) version of Python is used?
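For what it's worth, a venv created with python -m venv does not copy the standard library; it records the base interpreter and keeps using its stdlib, so threading.py resolving to the Homebrew python@3.10 install is expected even with the venv active. A small check (plain Python, nothing webui-specific):

```python
import sys
import threading

print("venv prefix:     ", sys.prefix)          # .../stable-diffusion-webui/venv while the venv is active
print("base interpreter:", sys.base_prefix)     # the Homebrew python@3.10 installation
print("threading module:", threading.__file__)  # lives under the base interpreter, not the venv
```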

Here's the run:

/usr/bin/env bash /Users/nick/PycharmProjects/stable-diffusion-webui/webui.sh

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################

################################################################
Running on nick user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
Create and activate python venv
################################################################

################################################################
Launching launch.py...
################################################################
Python 3.10.13 (main, Aug 24 2023, 22:36:46) [Clang 14.0.3 (clang-1403.0.22.14.1)]
Version: v1.6.0
Commit hash: 5ef669de080814067961f28357256e8fe27544f4
Installing torch and torchvision
Collecting torch==2.0.1
  Using cached torch-2.0.1-cp310-none-macosx_11_0_arm64.whl (55.8 MB)
Collecting torchvision==0.15.2
  Using cached torchvision-0.15.2-cp310-cp310-macosx_11_0_arm64.whl (1.4 MB)
Collecting filelock (from torch==2.0.1)
  Obtaining dependency information for filelock from https://files.pythonhosted.org/packages/52/90/45223db4e1df30ff14e8aebf9a1bf0222da2e7b49e53692c968f36817812/filelock-3.12.3-py3-none-any.whl.metadata
  Downloading filelock-3.12.3-py3-none-any.whl.metadata (2.7 kB)
Collecting typing-extensions (from torch==2.0.1)
  Obtaining dependency information for typing-extensions from https://files.pythonhosted.org/packages/ec/6b/63cc3df74987c36fe26157ee12e09e8f9db4de771e0f3404263117e75b95/typing_extensions-4.7.1-py3-none-any.whl.metadata
  Downloading typing_extensions-4.7.1-py3-none-any.whl.metadata (3.1 kB)
Collecting sympy (from torch==2.0.1)
  Using cached sympy-1.12-py3-none-any.whl (5.7 MB)
Collecting networkx (from torch==2.0.1)
  Using cached networkx-3.1-py3-none-any.whl (2.1 MB)
Collecting jinja2 (from torch==2.0.1)
  Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting numpy (from torchvision==0.15.2)
  Obtaining dependency information for numpy from https://files.pythonhosted.org/packages/c3/ea/1d95b399078ecaa7b5d791e1fdbb3aee272077d9fd5fb499593c87dec5ea/numpy-1.25.2-cp310-cp310-macosx_11_0_arm64.whl.metadata
  Downloading numpy-1.25.2-cp310-cp310-macosx_11_0_arm64.whl.metadata (5.6 kB)
Collecting requests (from torchvision==0.15.2)
  Obtaining dependency information for requests from https://files.pythonhosted.org/packages/70/8e/0e2d847013cb52cd35b38c009bb167a1a26b2ce6cd6965bf26b47bc0bf44/requests-2.31.0-py3-none-any.whl.metadata
  Downloading requests-2.31.0-py3-none-any.whl.metadata (4.6 kB)
Collecting pillow!=8.3.*,>=5.3.0 (from torchvision==0.15.2)
  Obtaining dependency information for pillow!=8.3.*,>=5.3.0 from https://files.pythonhosted.org/packages/ef/53/024e161112beb11008d6c7529c954e2ec641ae17b99e03fe9a539e114ae6/Pillow-10.0.0-cp310-cp310-macosx_11_0_arm64.whl.metadata
  Downloading Pillow-10.0.0-cp310-cp310-macosx_11_0_arm64.whl.metadata (9.5 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch==2.0.1)
  Obtaining dependency information for MarkupSafe>=2.0 from https://files.pythonhosted.org/packages/20/1d/713d443799d935f4d26a4f1510c9e61b1d288592fb869845e5cc92a1e055/MarkupSafe-2.1.3-cp310-cp310-macosx_10_9_universal2.whl.metadata
  Downloading MarkupSafe-2.1.3-cp310-cp310-macosx_10_9_universal2.whl.metadata (3.0 kB)
Collecting charset-normalizer<4,>=2 (from requests->torchvision==0.15.2)
  Obtaining dependency information for charset-normalizer<4,>=2 from https://files.pythonhosted.org/packages/ec/a7/96835706283d63fefbbbb4f119d52f195af00fc747e67cc54397c56312c8/charset_normalizer-3.2.0-cp310-cp310-macosx_11_0_arm64.whl.metadata
  Downloading charset_normalizer-3.2.0-cp310-cp310-macosx_11_0_arm64.whl.metadata (31 kB)
Collecting idna<4,>=2.5 (from requests->torchvision==0.15.2)
  Using cached idna-3.4-py3-none-any.whl (61 kB)
Collecting urllib3<3,>=1.21.1 (from requests->torchvision==0.15.2)
  Obtaining dependency information for urllib3<3,>=1.21.1 from https://files.pythonhosted.org/packages/9b/81/62fd61001fa4b9d0df6e31d47ff49cfa9de4af03adecf339c7bc30656b37/urllib3-2.0.4-py3-none-any.whl.metadata
  Downloading urllib3-2.0.4-py3-none-any.whl.metadata (6.6 kB)
Collecting certifi>=2017.4.17 (from requests->torchvision==0.15.2)
  Obtaining dependency information for certifi>=2017.4.17 from https://files.pythonhosted.org/packages/4c/dd/2234eab22353ffc7d94e8d13177aaa050113286e93e7b40eae01fbf7c3d9/certifi-2023.7.22-py3-none-any.whl.metadata
  Downloading certifi-2023.7.22-py3-none-any.whl.metadata (2.2 kB)
Collecting mpmath>=0.19 (from sympy->torch==2.0.1)
  Using cached mpmath-1.3.0-py3-none-any.whl (536 kB)
Using cached Pillow-10.0.0-cp310-cp310-macosx_11_0_arm64.whl (3.1 MB)
Using cached filelock-3.12.3-py3-none-any.whl (11 kB)
Using cached typing_extensions-4.7.1-py3-none-any.whl (33 kB)
Using cached numpy-1.25.2-cp310-cp310-macosx_11_0_arm64.whl (14.0 MB)
Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Using cached certifi-2023.7.22-py3-none-any.whl (158 kB)
Using cached charset_normalizer-3.2.0-cp310-cp310-macosx_11_0_arm64.whl (124 kB)
Using cached MarkupSafe-2.1.3-cp310-cp310-macosx_10_9_universal2.whl (17 kB)
Using cached urllib3-2.0.4-py3-none-any.whl (123 kB)
Installing collected packages: mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, charset-normalizer, certifi, requests, jinja2, filelock, torch, torchvision
Successfully installed MarkupSafe-2.1.3 certifi-2023.7.22 charset-normalizer-3.2.0 filelock-3.12.3 idna-3.4 jinja2-3.1.2 mpmath-1.3.0 networkx-3.1 numpy-1.25.2 pillow-10.0.0 requests-2.31.0 sympy-1.12 torch-2.0.1 torchvision-0.15.2 typing-extensions-4.7.1 urllib3-2.0.4
Installing clip
Installing open_clip
Installing requirements for CodeFormer
Installing requirements
Launching Web UI with arguments: --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
2023-09-01 18:18:30.799 Python[13219:522956] apply_selection_policy_once: avoid use of removable GPUs (via org.python.python:GPUSelectionPolicy->avoidRemovable)
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Loading weights [e948ca5dc4] from /Users/nick/PycharmProjects/stable-diffusion-webui/models/Stable-diffusion/absolutereality_v16.safetensors
Creating model from config: /Users/nick/PycharmProjects/stable-diffusion-webui/configs/v1-inference.yaml
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 85.1s (prepare environment: 55.7s, import torch: 5.9s, import gradio: 5.6s, setup paths: 4.7s, initialize shared: 0.2s, other imports: 12.1s, setup codeformer: 0.1s, load scripts: 0.3s, initialize extra networks: 0.1s, create ui: 0.2s, gradio launch: 0.1s).
loading stable diffusion model: TypeError
Traceback (most recent call last):
  File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/initialize.py", line 147, in load_model
    shared.sd_model  # noqa: B018
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/shared_items.py", line 110, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_models.py", line 499, in get_sd_model
    load_model()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_models.py", line 626, in load_model
    load_model_weights(sd_model, checkpoint_info, state_dict, timer)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_models.py", line 353, in load_model_weights
    model.load_state_dict(state_dict, strict=False)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_disable_initialization.py", line 223, in <lambda>
    module_load_state_dict = self.replace(torch.nn.Module, 'load_state_dict', lambda *args, **kwargs: load_state_dict(module_load_state_dict, *args, **kwargs))
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_disable_initialization.py", line 221, in load_state_dict
    original(module, state_dict, strict=strict)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2027, in load_state_dict
    load(self, state_dict)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  [Previous line repeated 3 more times]
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2009, in load
    module._load_from_state_dict(
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_disable_initialization.py", line 226, in <lambda>
    conv2d_load_from_state_dict = self.replace(torch.nn.Conv2d, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(conv2d_load_from_state_dict, *args, **kwargs))
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_disable_initialization.py", line 191, in load_from_state_dict
    module._parameters[name] = torch.nn.parameter.Parameter(torch.zeros_like(param, device=device, dtype=dtype), requires_grad=param.requires_grad)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_meta_registrations.py", line 1780, in zeros_like
    return aten.empty_like.default(
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_ops.py", line 287, in __call__
    return self._op(*args, **kwargs or {})
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_refs/__init__.py", line 4254, in empty_like
    return torch.empty_strided(
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.


Stable diffusion model failed to load
Applying attention optimization: sub-quadratic... done.
Loading weights [e948ca5dc4] from /Users/nick/PycharmProjects/stable-diffusion-webui/models/Stable-diffusion/absolutereality_v16.safetensors
Creating model from config: /Users/nick/PycharmProjects/stable-diffusion-webui/configs/v1-inference.yaml
loading stable diffusion model: TypeError
Traceback (most recent call last):
  File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/ui_extra_networks.py", line 392, in pages_html
    return refresh()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/ui_extra_networks.py", line 398, in refresh
    pg.refresh()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/ui_extra_networks_textual_inversion.py", line 13, in refresh
    sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 255, in load_textual_inversion_embeddings
    self.expected_shape = self.get_expected_shape()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 154, in get_expected_shape
    vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/shared_items.py", line 110, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_models.py", line 499, in get_sd_model
    load_model()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_models.py", line 626, in load_model
    load_model_weights(sd_model, checkpoint_info, state_dict, timer)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_models.py", line 353, in load_model_weights
    model.load_state_dict(state_dict, strict=False)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_disable_initialization.py", line 223, in <lambda>
    module_load_state_dict = self.replace(torch.nn.Module, 'load_state_dict', lambda *args, **kwargs: load_state_dict(module_load_state_dict, *args, **kwargs))
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_disable_initialization.py", line 221, in load_state_dict
    original(module, state_dict, strict=strict)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2027, in load_state_dict
    load(self, state_dict)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  [Previous line repeated 3 more times]
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2009, in load
    module._load_from_state_dict(
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_disable_initialization.py", line 226, in <lambda>
    conv2d_load_from_state_dict = self.replace(torch.nn.Conv2d, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(conv2d_load_from_state_dict, *args, **kwargs))
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_disable_initialization.py", line 191, in load_from_state_dict
    module._parameters[name] = torch.nn.parameter.Parameter(torch.zeros_like(param, device=device, dtype=dtype), requires_grad=param.requires_grad)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_meta_registrations.py", line 1780, in zeros_like
    return aten.empty_like.default(
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_ops.py", line 287, in __call__
    return self._op(*args, **kwargs or {})
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_refs/__init__.py", line 4254, in empty_like
    return torch.empty_strided(
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.


Stable diffusion model failed to load
Loading weights [e948ca5dc4] from /Users/nick/PycharmProjects/stable-diffusion-webui/models/Stable-diffusion/absolutereality_v16.safetensors
Creating model from config: /Users/nick/PycharmProjects/stable-diffusion-webui/configs/v1-inference.yaml
Traceback (most recent call last):
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/ui_extra_networks.py", line 392, in pages_html
    return refresh()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/ui_extra_networks.py", line 398, in refresh
    pg.refresh()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/ui_extra_networks_textual_inversion.py", line 13, in refresh
    sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 255, in load_textual_inversion_embeddings
    self.expected_shape = self.get_expected_shape()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 154, in get_expected_shape
    vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
AttributeError: 'NoneType' object has no attribute 'cond_stage_model'
loading stable diffusion model: TypeError
Traceback (most recent call last):
  File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/ui_extra_networks.py", line 392, in pages_html
    return refresh()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/ui_extra_networks.py", line 398, in refresh
    pg.refresh()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/ui_extra_networks_textual_inversion.py", line 13, in refresh
    sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 255, in load_textual_inversion_embeddings
    self.expected_shape = self.get_expected_shape()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 154, in get_expected_shape
    vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/shared_items.py", line 110, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_models.py", line 499, in get_sd_model
    load_model()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_models.py", line 626, in load_model
    load_model_weights(sd_model, checkpoint_info, state_dict, timer)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_models.py", line 353, in load_model_weights
    model.load_state_dict(state_dict, strict=False)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_disable_initialization.py", line 223, in <lambda>
    module_load_state_dict = self.replace(torch.nn.Module, 'load_state_dict', lambda *args, **kwargs: load_state_dict(module_load_state_dict, *args, **kwargs))
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_disable_initialization.py", line 221, in load_state_dict
    original(module, state_dict, strict=strict)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2027, in load_state_dict
    load(self, state_dict)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  [Previous line repeated 3 more times]
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2009, in load
    module._load_from_state_dict(
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_disable_initialization.py", line 226, in <lambda>
    conv2d_load_from_state_dict = self.replace(torch.nn.Conv2d, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(conv2d_load_from_state_dict, *args, **kwargs))
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_disable_initialization.py", line 191, in load_from_state_dict
    module._parameters[name] = torch.nn.parameter.Parameter(torch.zeros_like(param, device=device, dtype=dtype), requires_grad=param.requires_grad)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_meta_registrations.py", line 1780, in zeros_like
    return aten.empty_like.default(
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_ops.py", line 287, in __call__
    return self._op(*args, **kwargs or {})
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_refs/__init__.py", line 4254, in empty_like
    return torch.empty_strided(
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.


Stable diffusion model failed to load
Loading weights [e948ca5dc4] from /Users/nick/PycharmProjects/stable-diffusion-webui/models/Stable-diffusion/absolutereality_v16.safetensors
Traceback (most recent call last):
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/ui_extra_networks.py", line 392, in pages_html
    return refresh()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/ui_extra_networks.py", line 398, in refresh
    pg.refresh()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/ui_extra_networks_textual_inversion.py", line 13, in refresh
    sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 255, in load_textual_inversion_embeddings
    self.expected_shape = self.get_expected_shape()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 154, in get_expected_shape
    vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
AttributeError: 'NoneType' object has no attribute 'cond_stage_model'
Creating model from config: /Users/nick/PycharmProjects/stable-diffusion-webui/configs/v1-inference.yaml
loading stable diffusion model: TypeError
Traceback (most recent call last):
  File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/ui.py", line 1298, in <lambda>
    update_image_cfg_scale_visibility = lambda: gr.update(visible=shared.sd_model and shared.sd_model.cond_stage_key == "edit")
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/shared_items.py", line 110, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_models.py", line 499, in get_sd_model
    load_model()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_models.py", line 626, in load_model
    load_model_weights(sd_model, checkpoint_info, state_dict, timer)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_models.py", line 353, in load_model_weights
    model.load_state_dict(state_dict, strict=False)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_disable_initialization.py", line 223, in <lambda>
    module_load_state_dict = self.replace(torch.nn.Module, 'load_state_dict', lambda *args, **kwargs: load_state_dict(module_load_state_dict, *args, **kwargs))
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_disable_initialization.py", line 221, in load_state_dict
    original(module, state_dict, strict=strict)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2027, in load_state_dict
    load(self, state_dict)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  [Previous line repeated 3 more times]
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2009, in load
    module._load_from_state_dict(
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_disable_initialization.py", line 226, in <lambda>
    conv2d_load_from_state_dict = self.replace(torch.nn.Conv2d, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(conv2d_load_from_state_dict, *args, **kwargs))
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_disable_initialization.py", line 191, in load_from_state_dict
    module._parameters[name] = torch.nn.parameter.Parameter(torch.zeros_like(param, device=device, dtype=dtype), requires_grad=param.requires_grad)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_meta_registrations.py", line 1780, in zeros_like
    return aten.empty_like.default(
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_ops.py", line 287, in __call__
    return self._op(*args, **kwargs or {})
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_refs/__init__.py", line 4254, in empty_like
    return torch.empty_strided(
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.


Stable diffusion model failed to load
Loading weights [e948ca5dc4] from /Users/nick/PycharmProjects/stable-diffusion-webui/models/Stable-diffusion/absolutereality_v16.safetensors
Creating model from config: /Users/nick/PycharmProjects/stable-diffusion-webui/configs/v1-inference.yaml
loading stable diffusion model: TypeError
Traceback (most recent call last):
  File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/ui.py", line 1298, in <lambda>
    update_image_cfg_scale_visibility = lambda: gr.update(visible=shared.sd_model and shared.sd_model.cond_stage_key == "edit")
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/shared_items.py", line 110, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_models.py", line 499, in get_sd_model
    load_model()
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_models.py", line 626, in load_model
    load_model_weights(sd_model, checkpoint_info, state_dict, timer)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_models.py", line 353, in load_model_weights
    model.load_state_dict(state_dict, strict=False)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_disable_initialization.py", line 223, in <lambda>
    module_load_state_dict = self.replace(torch.nn.Module, 'load_state_dict', lambda *args, **kwargs: load_state_dict(module_load_state_dict, *args, **kwargs))
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_disable_initialization.py", line 221, in load_state_dict
    original(module, state_dict, strict=strict)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2027, in load_state_dict
    load(self, state_dict)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  [Previous line repeated 3 more times]
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2009, in load
    module._load_from_state_dict(
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_disable_initialization.py", line 226, in <lambda>
    conv2d_load_from_state_dict = self.replace(torch.nn.Conv2d, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(conv2d_load_from_state_dict, *args, **kwargs))
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/modules/sd_disable_initialization.py", line 191, in load_from_state_dict
    module._parameters[name] = torch.nn.parameter.Parameter(torch.zeros_like(param, device=device, dtype=dtype), requires_grad=param.requires_grad)
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_meta_registrations.py", line 1780, in zeros_like
    return aten.empty_like.default(
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_ops.py", line 287, in __call__
    return self._op(*args, **kwargs or {})
  File "/Users/nick/PycharmProjects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_refs/__init__.py", line 4254, in empty_like
    return torch.empty_strided(
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.


Stable diffusion model failed to load

nicklansley avatar Sep 01 '23 17:09 nicklansley

> Perhaps core packages such as threading.py are not copied into the virtual environment when a full path including the major.minor.patch (3.10.13) version of Python is used?

They should be, and that shouldn't be the issue.

Can you rename your ui-config.json to e.g. ui-config.json.backup and try again, just in case this is an issue with some botched setting? Also, can you try with another model than absolutereality_v16.safetensors?

akx avatar Sep 01 '23 17:09 akx

> Perhaps core packages such as threading.py are not copied into the virtual environment when a full path including the major.minor.patch (3.10.13) version of Python is used?

> They should be, and that shouldn't be the issue.
>
> Can you rename your ui-config.json to e.g. ui-config.json.backup and try again, just in case this is an issue with some botched setting? Also, can you try with another model than absolutereality_v16.safetensors?

I tried that, and even a new clean install:

➜ ./webui.sh

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################

################################################################
Running on vortex user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
Create and activate python venv
################################################################

################################################################
Launching launch.py...
################################################################
Python 3.10.4 (v3.10.4:9d38120e33, Mar 23 2022, 17:29:05) [Clang 13.0.0 (clang-1300.0.29.30)]
Version: v1.6.0
Commit hash: 5ef669de080814067961f28357256e8fe27544f4
Launching Web UI with arguments: --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
*** "Disable all extensions" option was set, will not load any extensions ***
Loading weights [4199bcdd14] from /Users/vortex/WebstormProjectsAi/stable-diffusion-webui/models/Stable-diffusion/ani/revAnimated_v122.safetensors
Creating model from config: /Users/vortex/WebstormProjectsAi/stable-diffusion-webui/configs/v1-inference.yaml
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 3.8s (import torch: 1.1s, import gradio: 0.7s, setup paths: 0.3s, initialize shared: 0.2s, other imports: 0.4s, scripts list_optimizers: 0.3s, initialize extra networks: 0.1s, create ui: 0.2s, gradio launch: 0.3s).
loading stable diffusion model: TypeError
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 966, in _bootstrap
    self._bootstrap_inner()
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 1009, in _bootstrap_inner
    self.run()
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 946, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/initialize.py", line 147, in load_model
    shared.sd_model  # noqa: B018
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/shared_items.py", line 110, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_models.py", line 499, in get_sd_model
    load_model()
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_models.py", line 626, in load_model
    load_model_weights(sd_model, checkpoint_info, state_dict, timer)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_models.py", line 353, in load_model_weights
    model.load_state_dict(state_dict, strict=False)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_disable_initialization.py", line 223, in <lambda>
    module_load_state_dict = self.replace(torch.nn.Module, 'load_state_dict', lambda *args, **kwargs: load_state_dict(module_load_state_dict, *args, **kwargs))
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_disable_initialization.py", line 221, in load_state_dict
    original(module, state_dict, strict=strict)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2027, in load_state_dict
    load(self, state_dict)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  [Previous line repeated 3 more times]
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2009, in load
    module._load_from_state_dict(
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_disable_initialization.py", line 226, in <lambda>
    conv2d_load_from_state_dict = self.replace(torch.nn.Conv2d, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(conv2d_load_from_state_dict, *args, **kwargs))
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_disable_initialization.py", line 191, in load_from_state_dict
    module._parameters[name] = torch.nn.parameter.Parameter(torch.zeros_like(param, device=device, dtype=dtype), requires_grad=param.requires_grad)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_meta_registrations.py", line 1780, in zeros_like
    return aten.empty_like.default(
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_ops.py", line 287, in __call__
    return self._op(*args, **kwargs or {})
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_refs/__init__.py", line 4254, in empty_like
    return torch.empty_strided(
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.


Stable diffusion model failed to load
Applying attention optimization: sub-quadratic... done.
Loading weights [4199bcdd14] from /Users/vortex/WebstormProjectsAi/stable-diffusion-webui/models/Stable-diffusion/ani/revAnimated_v122.safetensors
Creating model from config: /Users/vortex/WebstormProjectsAi/stable-diffusion-webui/configs/v1-inference.yaml
loading stable diffusion model: TypeError
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 966, in _bootstrap
    self._bootstrap_inner()
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 1009, in _bootstrap_inner
    self.run()
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/ui_extra_networks.py", line 392, in pages_html
    return refresh()
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/ui_extra_networks.py", line 398, in refresh
    pg.refresh()
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/ui_extra_networks_textual_inversion.py", line 13, in refresh
    sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 255, in load_textual_inversion_embeddings
    self.expected_shape = self.get_expected_shape()
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 154, in get_expected_shape
    vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/shared_items.py", line 110, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_models.py", line 499, in get_sd_model
    load_model()
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_models.py", line 626, in load_model
    load_model_weights(sd_model, checkpoint_info, state_dict, timer)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_models.py", line 353, in load_model_weights
    model.load_state_dict(state_dict, strict=False)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_disable_initialization.py", line 223, in <lambda>
    module_load_state_dict = self.replace(torch.nn.Module, 'load_state_dict', lambda *args, **kwargs: load_state_dict(module_load_state_dict, *args, **kwargs))
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_disable_initialization.py", line 221, in load_state_dict
    original(module, state_dict, strict=strict)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2027, in load_state_dict
    load(self, state_dict)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  [Previous line repeated 3 more times]
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2009, in load
    module._load_from_state_dict(
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_disable_initialization.py", line 226, in <lambda>
    conv2d_load_from_state_dict = self.replace(torch.nn.Conv2d, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(conv2d_load_from_state_dict, *args, **kwargs))
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_disable_initialization.py", line 191, in load_from_state_dict
    module._parameters[name] = torch.nn.parameter.Parameter(torch.zeros_like(param, device=device, dtype=dtype), requires_grad=param.requires_grad)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_meta_registrations.py", line 1780, in zeros_like
    return aten.empty_like.default(
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_ops.py", line 287, in __call__
    return self._op(*args, **kwargs or {})
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_refs/__init__.py", line 4254, in empty_like
    return torch.empty_strided(
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.


Stable diffusion model failed to load
Loading weights [4199bcdd14] from /Users/vortex/WebstormProjectsAi/stable-diffusion-webui/models/Stable-diffusion/ani/revAnimated_v122.safetensors
Creating model from config: /Users/vortex/WebstormProjectsAi/stable-diffusion-webui/configs/v1-inference.yaml
Traceback (most recent call last):
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/ui_extra_networks.py", line 392, in pages_html
    return refresh()
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/ui_extra_networks.py", line 398, in refresh
    pg.refresh()
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/ui_extra_networks_textual_inversion.py", line 13, in refresh
    sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 255, in load_textual_inversion_embeddings
    self.expected_shape = self.get_expected_shape()
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 154, in get_expected_shape
    vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
AttributeError: 'NoneType' object has no attribute 'cond_stage_model'
loading stable diffusion model: TypeError
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 966, in _bootstrap
    self._bootstrap_inner()
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 1009, in _bootstrap_inner
    self.run()
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/ui_extra_networks.py", line 392, in pages_html
    return refresh()
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/ui_extra_networks.py", line 398, in refresh
    pg.refresh()
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/ui_extra_networks_textual_inversion.py", line 13, in refresh
    sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 255, in load_textual_inversion_embeddings
    self.expected_shape = self.get_expected_shape()
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 154, in get_expected_shape
    vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/shared_items.py", line 110, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_models.py", line 499, in get_sd_model
    load_model()
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_models.py", line 626, in load_model
    load_model_weights(sd_model, checkpoint_info, state_dict, timer)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_models.py", line 353, in load_model_weights
    model.load_state_dict(state_dict, strict=False)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_disable_initialization.py", line 223, in <lambda>
    module_load_state_dict = self.replace(torch.nn.Module, 'load_state_dict', lambda *args, **kwargs: load_state_dict(module_load_state_dict, *args, **kwargs))
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_disable_initialization.py", line 221, in load_state_dict
    original(module, state_dict, strict=strict)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2027, in load_state_dict
    load(self, state_dict)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  [Previous line repeated 3 more times]
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2009, in load
    module._load_from_state_dict(
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_disable_initialization.py", line 226, in <lambda>
    conv2d_load_from_state_dict = self.replace(torch.nn.Conv2d, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(conv2d_load_from_state_dict, *args, **kwargs))
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_disable_initialization.py", line 191, in load_from_state_dict
    module._parameters[name] = torch.nn.parameter.Parameter(torch.zeros_like(param, device=device, dtype=dtype), requires_grad=param.requires_grad)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_meta_registrations.py", line 1780, in zeros_like
    return aten.empty_like.default(
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_ops.py", line 287, in __call__
    return self._op(*args, **kwargs or {})
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_refs/__init__.py", line 4254, in empty_like
    return torch.empty_strided(
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.


Stable diffusion model failed to load
Loading weights [4199bcdd14] from /Users/vortex/WebstormProjectsAi/stable-diffusion-webui/models/Stable-diffusion/ani/revAnimated_v122.safetensors
Traceback (most recent call last):
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/ui_extra_networks.py", line 392, in pages_html
    return refresh()
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/ui_extra_networks.py", line 398, in refresh
    pg.refresh()
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/ui_extra_networks_textual_inversion.py", line 13, in refresh
    sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True)
Creating model from config: /Users/vortex/WebstormProjectsAi/stable-diffusion-webui/configs/v1-inference.yaml
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 255, in load_textual_inversion_embeddings
    self.expected_shape = self.get_expected_shape()
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 154, in get_expected_shape
    vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
AttributeError: 'NoneType' object has no attribute 'cond_stage_model'
loading stable diffusion model: TypeError
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 966, in _bootstrap
    self._bootstrap_inner()
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 1009, in _bootstrap_inner
    self.run()
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/ui.py", line 1298, in <lambda>
    update_image_cfg_scale_visibility = lambda: gr.update(visible=shared.sd_model and shared.sd_model.cond_stage_key == "edit")
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/shared_items.py", line 110, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_models.py", line 499, in get_sd_model
    load_model()
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_models.py", line 626, in load_model
    load_model_weights(sd_model, checkpoint_info, state_dict, timer)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_models.py", line 353, in load_model_weights
    model.load_state_dict(state_dict, strict=False)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_disable_initialization.py", line 223, in <lambda>
    module_load_state_dict = self.replace(torch.nn.Module, 'load_state_dict', lambda *args, **kwargs: load_state_dict(module_load_state_dict, *args, **kwargs))
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_disable_initialization.py", line 221, in load_state_dict
    original(module, state_dict, strict=strict)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2027, in load_state_dict
    load(self, state_dict)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2015, in load
    load(child, child_state_dict, child_prefix)
  [Previous line repeated 3 more times]
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2009, in load
    module._load_from_state_dict(
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_disable_initialization.py", line 226, in <lambda>
    conv2d_load_from_state_dict = self.replace(torch.nn.Conv2d, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(conv2d_load_from_state_dict, *args, **kwargs))
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/modules/sd_disable_initialization.py", line 191, in load_from_state_dict
    module._parameters[name] = torch.nn.parameter.Parameter(torch.zeros_like(param, device=device, dtype=dtype), requires_grad=param.requires_grad)
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_meta_registrations.py", line 1780, in zeros_like
    return aten.empty_like.default(
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_ops.py", line 287, in __call__
    return self._op(*args, **kwargs or {})
  File "/Users/vortex/WebstormProjectsAi/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_refs/__init__.py", line 4254, in empty_like
    return torch.empty_strided(
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.


Stable diffusion model failed to load

0-vortex avatar Sep 01 '23 18:09 0-vortex

Forgot to mention that even with the errors, in the second instance basic generation works, but when trying hi-res it goes out of memory like this:

loc("MM/(batch1@batch2)"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/0783246a-4091-11ee-8fca-aead88ae2785/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":39:0)): error: 'anec.matmul' op Invalid configuration for the following reasons: Tensor dimensions N8D1C512H1W38400 are not within supported range, N[1-65536]D[1-16384]C[1-65536]H[1-16384]W[1-16384].

It previously wouldn't run out of memory; that's a 512x768 image I upscaled last week :<

0-vortex avatar Sep 01 '23 18:09 0-vortex

@akx yes that worked! Renaming ui-config.json to ui-config.json.backup fixed the issue and now it is working fine, including using the GPU (and the MacBook fans!). See the attached screenshot of Activity Monitor, with Python at 95.8% GPU on the first line: [Screenshot 2023-09-01 at 20.20.43]

nicklansley avatar Sep 01 '23 19:09 nicklansley

@nicklansley Great to hear! It'd be nice to see the backup and the newly generated config so we can figure out what went wrong from the diff. If you don't feel like sharing the files here, you can send them as attachments to (my github username) at iki dot fi.

akx avatar Sep 01 '23 19:09 akx

@akx - I have just performed a 'diff ui-config.json ui-config.json.backup' and there are no differences!

I am still using absolutereality_v16.safetensors and can change to other models and they work too. So perhaps the act of the application not being able to find ui-config.json, and having to rebuild it, fixes some issue?

Anyway, here is my ui-config.json that works (not forgetting it is identical to ui-config.json.backup). I have added a .txt extension because of GitHub restrictions:

ui-config.json.txt

nicklansley avatar Sep 01 '23 20:09 nicklansley

@nicklansley Did you by any chance rename the file while webui was running? I'm trying to think of how the files could be identical and still have one work and the other not...

akx avatar Sep 01 '23 20:09 akx

I'll jump into your discussion :) I am facing the same issue as nicklansley mentioned. I have followed your instructions and made sure to use the venv only. The stack trace is very similar, with the same final message.

TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.

I notice that if I switch between two models from the UI dropdown, then it works.

yanis-git avatar Sep 01 '23 20:09 yanis-git

@akx I have done some further testing to understand why renaming the file seemed to solve the problem. I found out that the issue is not related to the file name, but to the first model that the application loads from the models/Stable-diffusion directory.

When I started the application with webui.sh, it automatically loaded the first model in alphabetical order, which was absolutereality_v16.safetensors. This model caused a stream of errors in the terminal, saying that it could not convert a MPS Tensor to float64 dtype (TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead. Stable diffusion model failed to load).

However, when I renamed this model to absolutereality_v16.safetensors.bak and restarted webui.sh, it loaded the next model in alphabetical order, which was epicrealism_pureEvolutionV3.safetensors. This model worked fine and did not cause any errors.

I then renamed absolutereality_v16.safetensors.bak back to absolutereality_v16.safetensors and restarted webui.sh again. This time, it did not load this model by default, but remembered the last model I used, which was epicrealism_pureEvolutionV3.safetensors. This model continued to work fine.

I then switched to absolutereality_v16.safetensors from the UI, without restarting anything, and it also worked fine. No errors were shown in the terminal, demonstrating that there does not seem to be anything inherently wrong with this model.

However, when I restarted webui.sh again with absolutereality_v16.safetensors as the currently chosen model, the errors reappeared.

I also followed @yanis-git's suggestion and switched between all the different models from the UI, without restarting anything, and they all worked fine.

So, there appears to be a bug in loading the first model on startup, even if that model is compatible with the application. Of course, there may be some specific issue with absolutereality_v16.safetensors, but the crucial point is that it has been working fine up until this commit. Has the application code changed to perform extra compatibility checks on models, checks that this model fails on startup? For example, I assume you must detect the SD version (1.4, 1.5, 2.x, XL) of the model, and that absolutereality_v16.safetensors is misrepresenting itself in some way? Just some thoughts...
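
If it helps narrow this down, the dtypes stored in a checkpoint can be inspected directly (a hypothetical diagnostic, assuming the safetensors package is available in the venv; the path below is just an example):

from safetensors import safe_open

path = "models/Stable-diffusion/absolutereality_v16.safetensors"  # example path
dtypes = set()
with safe_open(path, framework="pt", device="cpu") as f:
    for key in f.keys():
        dtypes.add(str(f.get_tensor(key).dtype))
print(dtypes)  # any torch.float64 entry here cannot be placed on MPS without a cast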

If you want to try the model yourself (Absolute Reality v1.6) you can download it from the CivitAI catalogue here: https://civitai.com/models/81458?modelVersionId=108576 with a note that it has been superseded by v1.8 which does not have this issue as the startup model.

nicklansley avatar Sep 02 '23 08:09 nicklansley

@nicklansley @akx IMHO I don't think the reasoning for blaming a checkpoint is correct. In my installations, where I got rid of the errors just by cycling models, I never had version 1.6 of Absolute Reality (raw/absolutereality_v181.safetensors in my logs). Furthermore, the MPS errors persist when trying to do hi-res; I can't even hi-res a 512x512 on an M1 Max, it takes 20s/it, while in the 1.5.2 folder that's 6-7s/it. There are also breaking changes to the RNG: I can't reproduce an image with the same config.json, even after diffing the 1.5.2 config against the 1.6.0 config to make sure I have everything set up the same way (some settings changed name), and I couldn't figure it out. The closest I can get to matching my previous generations is with the NV RNG, but the images are still not the same and the colours are a bit off.

If I had to guess, with hi-res making two passes, how is memory managed there?
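
For what it's worth, MPS memory use can be inspected from Python while generating (a rough sketch using the standard torch.mps calls from torch 2.0; nothing here is webui-specific):

import torch

if torch.backends.mps.is_available():
    print("allocated by tensors:", torch.mps.current_allocated_memory() / 2**20, "MiB")
    print("reserved by driver:  ", torch.mps.driver_allocated_memory() / 2**20, "MiB")
    torch.mps.empty_cache()  # release cached blocks back to the system, e.g. between passes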

0-vortex avatar Sep 02 '23 09:09 0-vortex

@0-vortex thanks for the update - I seem to be running hires OK with the application using just over 6GB of memory for 512x512 then switching up to just over 10GB for hires 1024x1024 (according to Activity Monitor).

If you run Activity Monitor and display the GPU History window when running the app, does it show the GPU maxing out during generation, or not used at all? When I followed the advice from @ngtongsheng it worked but only used the CPU, and I was getting the same seconds-per-iteration performance that you experienced (surprisingly good for CPU - all hail RISC architecture!) but way too slow for normal use. So just check your GPU is being engaged.

nicklansley avatar Sep 02 '23 10:09 nicklansley

@0-vortex thanks for the update - I seem to be running hires OK with the application using just over 6GB of memory for 512x512 then switching up to just over 10GB for hires 1024x1024 (according to Activity Monitor).

If you run Activity Monitor and display the GPU History window when running the app, does it show the GPU maxing out during generation, or not used at all? When I followed the advice from @ngtongsheng it worked but only used the CPU, and I was getting the same seconds-per-iteration performance that you experienced (surprisingly good for CPU - all hail RISC architecture!) but way too slow for normal use. So just check your GPU is being engaged.

It spikes to 75-80% just generating the 512x512 image for me, 95%+ on hi-res :<

0-vortex avatar Sep 02 '23 14:09 0-vortex

I have what seems to be the same issue on my Intel Mac with a GPU, and renaming ui-config.json to ui-config.json.backup seems to have no effect. I don't know much about Stable Diffusion because I just started using it today, but the last line says "TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead." Using "--skip-torch-cuda-test --no-half --use-cpu all" solves my problem, but it takes too long to complete. Hope it will be fixed soon.

soarcreator avatar Sep 02 '23 18:09 soarcreator

I had the same issue and at least for me it was related to a messed up virtualenv. I had to do this:

  1. Clone a new copy of a1111
  2. cd to the new cloned directory.
  3. Run virtualenv venv -p python3, then source venv/bin/activate
  4. Then in the same shell window run webui.sh and let it repull all the dependencies.

After this the issue with certain older models went away.
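
After recreating the venv it is worth confirming that the fresh PyTorch build actually sees MPS (a small illustrative check, run inside the activated venv):

import torch

print("torch:", torch.__version__)
print("MPS built:    ", torch.backends.mps.is_built())
print("MPS available:", torch.backends.mps.is_available())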

ericwagner101 avatar Sep 03 '23 09:09 ericwagner101

I had the same issue and at least for me it was related to a messed up virtualenv. I had to do this:

  1. Clone a new copy of a1111
  2. cd to the new cloned directory.
  3. Run virtualenv venv -p python3, then source venv/bin/activate
  4. Then in the same shell window run webui.sh and let it repull all the dependencies.

After this the issue with certain older models went away.

@ericwagner101 Thank you very much, tried on my MacBook Pro 13-inch with the M1 chip and confirmed it works.

512x512 took 38secs, not too bad

ngtongsheng avatar Sep 03 '23 17:09 ngtongsheng

I faced the same issue; the following works for me:

  • Delete webui-macos-env.sh
  • Remove export PYTORCH_MPS_HIGH_WATERMARK_RATIO="0.0" in webui-user.sh if it exists
  • Add this env var: export COMMANDLINE_ARGS="--skip-torch-cuda-test --no-half --use-cpu all"

njulhy avatar Sep 04 '23 13:09 njulhy

use-cpu all will make the webui not use MPS, which is quite certainly not what you want.

akx avatar Sep 04 '23 13:09 akx

I had the same issue and at least for me it was related to a messed up virtualenv. I had to do this:

  1. Clone a new copy of a1111
  2. cd to the new cloned directory.
  3. Run virtualenv venv -p python3, then source venv/bin/activate
  4. Then in the same shell window run webui.sh and let it repull all the dependencies.

After this the issue with certain older models went away.

@ericwagner101 Thank you very much, tried on my MacBook Pro 13-inch with the M1 chip and confirmed it works.

512x512 took 38secs, not too bad

@ngtongsheng Yikes that sounds slow - I know I'm on an M2 but I get 512x512 created in 6 seconds - now M2 is fast but not 6 times as fast as M1!

Check your Activity Monitor to ensure the GPU is being used - it should be close to 100% during image generation. If very low or zero, then your M1's CPU cores are doing the job instead.
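
One quick way to check from Python that work really goes to MPS rather than the CPU is to time the same synthetic workload on both devices (illustrative only; the matrix size and iteration count are arbitrary):

import time
import torch

def time_matmuls(device: str, n: int = 2048, iters: int = 50) -> float:
    x = torch.randn(n, n, device=device)
    start = time.time()
    for _ in range(iters):
        _ = x @ x
    if device == "mps":
        torch.mps.synchronize()  # wait for queued GPU work before reading the clock
    return time.time() - start

print("cpu:", round(time_matmuls("cpu"), 2), "s")
if torch.backends.mps.is_available():
    print("mps:", round(time_matmuls("mps"), 2), "s")  # should come out several times faster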

nicklansley avatar Sep 04 '23 16:09 nicklansley

As said above, on my M2 Max, --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate works and uses MPS.

akx avatar Sep 04 '23 16:09 akx