
[Bug]: TypeError: Trying to convert BFloat16 to the MPS backend but it does not have support for that dtype.

Open noahark opened this issue 1 year ago • 15 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits

What happened?

Mac M2 CPU

An error is reported during startup, although the web UI still opens. The same error appears after clicking the Generate button.

Steps to reproduce the problem

./webui.sh

What should have happened?

No error.

Sysinfo

Mac M2

TypeError: Trying to convert BFloat16 to the MPS backend but it does not have support for that dtype

What browsers do you use to access the UI ?

Google Chrome

Console logs

To create a public link, set `share=True` in `launch()`.
Startup time: 4.8s (import torch: 1.3s, import gradio: 0.4s, setup paths: 0.4s, other imports: 0.5s, load scripts: 0.5s, create ui: 1.3s, gradio launch: 0.1s).
loading stable diffusion model: TypeError
Traceback (most recent call last):
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/threading.py", line 930, in _bootstrap
    self._bootstrap_inner()
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/***/stable-diffusion-webui/modules/initialize.py", line 147, in load_model
    shared.sd_model  # noqa: B018
  File "/Users/***/stable-diffusion-webui/modules/shared_items.py", line 110, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/Users/***/stable-diffusion-webui/modules/sd_models.py", line 499, in get_sd_model
    load_model()
  File "/Users/***/stable-diffusion-webui/modules/sd_models.py", line 626, in load_model
    load_model_weights(sd_model, checkpoint_info, state_dict, timer)
  File "/Users/***/stable-diffusion-webui/modules/sd_models.py", line 381, in load_model_weights
    model.half()
  File "/Users/***/stable-diffusion-webui/venv/lib/python3.9/site-packages/lightning_fabric/utilities/device_dtype_mixin.py", line 98, in half
    return super().half()
  File "/Users/***/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1001, in half
    return self._apply(lambda t: t.half() if t.is_floating_point() else t)
  File "/Users/***/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/Users/***/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/Users/***/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "/Users/***/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 820, in _apply
    param_applied = fn(param)
  File "/Users/***/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1001, in <lambda>
    return self._apply(lambda t: t.half() if t.is_floating_point() else t)
TypeError: Trying to convert BFloat16 to the MPS backend but it does not have support for that dtype.


Stable diffusion model failed to load
Applying attention optimization: sub-quadratic... done.

Additional information

No response
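For context on the traceback: `load_model_weights` calls `model.half()`, which maps every floating-point parameter through `tensor.half()`, and on affected PyTorch builds the MPS backend rejects `bfloat16` tensors, hence the TypeError. Below is a minimal, CPU-only sketch of the idea behind a workaround (upcast `bfloat16` to `float32` before halving); `safe_half` is a hypothetical helper written for illustration, not webui code:

```python
import torch

def safe_half(t: torch.Tensor) -> torch.Tensor:
    # Hypothetical workaround sketch: the MPS backend on affected PyTorch
    # versions has no bfloat16 support, so upcast such tensors to float32
    # first, then downcast to float16 the way model.half() would.
    if t.dtype == torch.bfloat16:
        t = t.to(torch.float32)
    return t.half() if t.is_floating_point() else t

w = torch.zeros(4, dtype=torch.bfloat16)  # e.g. a bfloat16 checkpoint weight
print(safe_half(w).dtype)  # torch.float16
```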

noahark avatar Oct 10 '23 09:10 noahark

To create a public link, set `share=True` in `launch()`.
Startup time: 4.1s (import torch: 1.2s, import gradio: 0.3s, setup paths: 0.4s, initialize shared: 0.9s, other imports: 0.4s, load scripts: 0.3s, initialize extra networks: 0.1s, create ui: 0.3s, gradio launch: 0.1s).
loading stable diffusion model: TypeError
Traceback (most recent call last):
  File "/opt/homebrew/Cellar/python@3.11/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py", line 1002, in _bootstrap
    self._bootstrap_inner()
  File "/opt/homebrew/Cellar/python@3.11/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
    self.run()
  File "/opt/homebrew/Cellar/python@3.11/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py", line 982, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/***/stable-diffusion-webui/modules/initialize.py", line 147, in load_model
    shared.sd_model  # noqa: B018
  File "/Users/***/stable-diffusion-webui/modules/shared_items.py", line 110, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/Users/***/stable-diffusion-webui/modules/sd_models.py", line 499, in get_sd_model
    load_model()
  File "/Users/***/stable-diffusion-webui/modules/sd_models.py", line 626, in load_model
    load_model_weights(sd_model, checkpoint_info, state_dict, timer)
  File "/Users/***/stable-diffusion-webui/modules/sd_models.py", line 381, in load_model_weights
    model.half()
  File "/Users/***/stable-diffusion-webui/venv/lib/python3.11/site-packages/lightning_fabric/utilities/device_dtype_mixin.py", line 98, in half
    return super().half()
  File "/Users/***/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1001, in half
    return self._apply(lambda t: t.half() if t.is_floating_point() else t)
  File "/Users/***/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/Users/***/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/Users/***/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "/Users/***/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 820, in _apply
    param_applied = fn(param)
  File "/Users/***/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1001, in <lambda>
    return self._apply(lambda t: t.half() if t.is_floating_point() else t)
TypeError: Trying to convert BFloat16 to the MPS backend but it does not have support for that dtype.

Stable diffusion model failed to load
Applying attention optimization: sub-quadratic... done.
Loading weights [aadddd3d75] from /Users/***/stable-diffusion-webui/models/Stable-diffusion/deliberate_v3.safetensors
Creating model from config: /Users/***/stable-diffusion-webui/configs/v1-inference.yaml
Exception in thread Thread-2 (load_model):
Traceback (most recent call last):
  File "/opt/homebrew/Cellar/python@3.11/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
    self.run()
  File "/opt/homebrew/Cellar/python@3.11/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py", line 982, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/***/stable-diffusion-webui/modules/initialize.py", line 153, in load_model
    devices.first_time_calculation()
  File "/Users/***/stable-diffusion-webui/modules/devices.py", line 152, in first_time_calculation
    conv2d(x)
TypeError: 'NoneType' object is not callable
loading stable diffusion model: TypeError
Traceback (most recent call last):
  File "/opt/homebrew/Cellar/python@3.11/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py", line 1002, in _bootstrap
    self._bootstrap_inner()
  File "/opt/homebrew/Cellar/python@3.11/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
    self.run()
  File "/Users/***/stable-diffusion-webui/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/Users/***/stable-diffusion-webui/venv/lib/python3.11/site-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "/Users/***/stable-diffusion-webui/modules/ui_extra_networks.py", line 392, in pages_html
    return refresh()
  File "/Users/***/stable-diffusion-webui/modules/ui_extra_networks.py", line 398, in refresh
    pg.refresh()
  File "/Users/***/stable-diffusion-webui/modules/ui_extra_networks_textual_inversion.py", line 13, in refresh
    sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True)
  File "/Users/***/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 255, in load_textual_inversion_embeddings
    self.expected_shape = self.get_expected_shape()
  File "/Users/***/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 154, in get_expected_shape
    vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
  File "/Users/***/stable-diffusion-webui/modules/shared_items.py", line 110, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/Users/***/stable-diffusion-webui/modules/sd_models.py", line 499, in get_sd_model
    load_model()
  File "/Users/***/stable-diffusion-webui/modules/sd_models.py", line 626, in load_model
    load_model_weights(sd_model, checkpoint_info, state_dict, timer)
  File "/Users/***/stable-diffusion-webui/modules/sd_models.py", line 381, in load_model_weights
    model.half()
  File "/Users/***/stable-diffusion-webui/venv/lib/python3.11/site-packages/lightning_fabric/utilities/device_dtype_mixin.py", line 98, in half
    return super().half()
  File "/Users/***/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1001, in half
    return self._apply(lambda t: t.half() if t.is_floating_point() else t)
  File "/Users/***/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/Users/***/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/Users/***/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "/Users/***/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 820, in _apply
    param_applied = fn(param)
  File "/Users/***/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1001, in <lambda>
    return self._apply(lambda t: t.half() if t.is_floating_point() else t)
TypeError: Trying to convert BFloat16 to the MPS backend but it does not have support for that dtype.

noahark avatar Oct 10 '23 11:10 noahark

I also encountered the same problem

rex823 avatar Oct 16 '23 07:10 rex823

Same here: M2 CPU & macOS Sonoma. Occurs when I try generating an image.

okamietovolk avatar Oct 18 '23 02:10 okamietovolk

Same here, on a Mac Studio M2

4ld3v avatar Oct 20 '23 11:10 4ld3v

Same for me too, has anyone found a solution to this?

Arthurofox avatar Oct 27 '23 09:10 Arthurofox

Same over here. Since I'm new to this, I assumed it was a rookie error, but it seems like everyone's having this issue.

xjulietxlizx avatar Oct 28 '23 16:10 xjulietxlizx

Any updates?

Kingmidas74 avatar Nov 09 '23 12:11 Kingmidas74

--disable-model-loading-ram-optimization

zag13 avatar Nov 13 '23 07:11 zag13

@zag13

--disable-model-loading-ram-optimization

This finally helped! Works! Thanks!

okamietovolk avatar Nov 15 '23 05:11 okamietovolk

Where should I write this?

IEROA7 avatar Dec 02 '23 19:12 IEROA7

--disable-model-loading-ram-optimization

Where should I put this line?

ypxie avatar Dec 16 '23 18:12 ypxie

@zag13

--disable-model-loading-ram-optimization

This finally helped! Works! Thanks!

@zag13

--disable-model-loading-ram-optimization

This finally helped! Works! Thanks!

How to use this? I also encountered this error

manwallet avatar Dec 26 '23 11:12 manwallet

@zag13

--disable-model-loading-ram-optimization

This finally helped! Works! Thanks!

@zag13

--disable-model-loading-ram-optimization

This finally helped! Works! Thanks!

How to use this? I also encountered this error

Command line parameters

zag13 avatar Dec 26 '23 11:12 zag13
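For those asking where the flag goes: it is a webui command-line argument. You can pass it directly to the launcher, or set it persistently via `COMMANDLINE_ARGS` in `webui-user.sh` (a sketch assuming the standard AUTOMATIC1111 checkout layout; your install path may differ):

```shell
# Option 1: one-off, pass the flag straight to the launcher:
#   ./webui.sh --disable-model-loading-ram-optimization

# Option 2: make it permanent. From your stable-diffusion-webui directory,
# add (or edit) the COMMANDLINE_ARGS line in webui-user.sh:
printf '%s\n' 'export COMMANDLINE_ARGS="--disable-model-loading-ram-optimization"' >> webui-user.sh

# Confirm the line is present, then launch normally with ./webui.sh:
grep COMMANDLINE_ARGS webui-user.sh
```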

@zag13

--disable-model-loading-ram-optimization

This finally helped! Works! Thanks!

@zag13

--disable-model-loading-ram-optimization

This finally helped! Works! Thanks!

How to use this? I also encountered this error

Command line parameters

Should I enter this command directly into the terminal?

manwallet avatar Dec 26 '23 11:12 manwallet

Problem solved, thank you.


manwallet avatar Dec 26 '23 11:12 manwallet