stable-diffusion-webui
[Bug]: TypeError: Trying to convert BFloat16 to the MPS backend but it does not have support for that dtype.
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What happened?
Mac M2 CPU.
The error is reported during startup; the web UI still opens, but the same error occurs after clicking the Generate button.
Steps to reproduce the problem
./webui.sh
What should have happened?
No error; the model should load and the UI should generate images.
Sysinfo
Mac M2
TypeError: Trying to convert BFloat16 to the MPS backend but it does not have support for that dtype
What browsers do you use to access the UI?
Google Chrome
Console logs
To create a public link, set `share=True` in `launch()`.
Startup time: 4.8s (import torch: 1.3s, import gradio: 0.4s, setup paths: 0.4s, other imports: 0.5s, load scripts: 0.5s, create ui: 1.3s, gradio launch: 0.1s).
loading stable diffusion model: TypeError
Traceback (most recent call last):
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/threading.py", line 930, in _bootstrap
self._bootstrap_inner()
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/threading.py", line 973, in _bootstrap_inner
self.run()
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/threading.py", line 910, in run
self._target(*self._args, **self._kwargs)
File "/Users/***/stable-diffusion-webui/modules/initialize.py", line 147, in load_model
shared.sd_model # noqa: B018
File "/Users/***/stable-diffusion-webui/modules/shared_items.py", line 110, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "/Users/***/stable-diffusion-webui/modules/sd_models.py", line 499, in get_sd_model
load_model()
File "/Users/***/stable-diffusion-webui/modules/sd_models.py", line 626, in load_model
load_model_weights(sd_model, checkpoint_info, state_dict, timer)
File "/Users/***/stable-diffusion-webui/modules/sd_models.py", line 381, in load_model_weights
model.half()
File "/Users/***/stable-diffusion-webui/venv/lib/python3.9/site-packages/lightning_fabric/utilities/device_dtype_mixin.py", line 98, in half
return super().half()
File "/Users/***/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1001, in half
return self._apply(lambda t: t.half() if t.is_floating_point() else t)
File "/Users/***/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 797, in _apply
module._apply(fn)
File "/Users/***/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 797, in _apply
module._apply(fn)
File "/Users/***/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 797, in _apply
module._apply(fn)
[Previous line repeated 1 more time]
File "/Users/***/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 820, in _apply
param_applied = fn(param)
File "/Users/***/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1001, in <lambda>
return self._apply(lambda t: t.half() if t.is_floating_point() else t)
TypeError: Trying to convert BFloat16 to the MPS backend but it does not have support for that dtype.
Stable diffusion model failed to load
Applying attention optimization: sub-quadratic... done.
Additional information
No response
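For readers following the traceback: `nn.Module.half()` calls `_apply`, which recurses into submodules and runs `t.half()` on every floating-point parameter; the error fires because this torch build's MPS backend cannot hold BFloat16 tensors. Below is a minimal, torch-free sketch of that failure path (the `Param`/`Module` classes are hypothetical stand-ins, not the real torch API):

```python
# Toy sketch of how nn.Module.half() walks the module tree in the
# traceback above: _apply recurses into children first, then runs the
# conversion fn on each parameter. A backend that rejects a dtype
# (here: MPS rejecting bfloat16) raises from inside fn(param).

class Param:
    def __init__(self, dtype):
        self.dtype = dtype

class Module:
    def __init__(self, params=(), children=()):
        self.params = list(params)
        self.children = list(children)

    def _apply(self, fn):
        for child in self.children:      # recurse into submodules first
            child._apply(fn)
        self.params = [fn(p) for p in self.params]
        return self

    def half(self):
        def to_half(p):
            # stand-in for the MPS limitation hit by t.half()
            if p.dtype == "bfloat16":
                raise TypeError("Trying to convert BFloat16 to the MPS backend "
                                "but it does not have support for that dtype.")
            return Param("float16")
        return self._apply(to_half)

# One bf16 parameter buried in a submodule is enough to abort the whole cast.
m = Module(params=[Param("float32")],
           children=[Module(params=[Param("bfloat16")])])
try:
    m.half()
except TypeError as e:
    print("raised:", e)
```

This is also why the later `'NoneType' object is not callable` error appears: the model never finished loading, so the follow-up calls operate on a missing model.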
To create a public link, set `share=True` in `launch()`.
Startup time: 4.1s (import torch: 1.2s, import gradio: 0.3s, setup paths: 0.4s, initialize shared: 0.9s, other imports: 0.4s, load scripts: 0.3s, initialize extra networks: 0.1s, create ui: 0.3s, gradio launch: 0.1s).
loading stable diffusion model: TypeError
Traceback (most recent call last):
File "/opt/homebrew/Cellar/[email protected]/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py", line 1002, in _bootstrap
self._bootstrap_inner()
File "/opt/homebrew/Cellar/[email protected]/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
self.run()
File "/opt/homebrew/Cellar/[email protected]/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py", line 982, in run
self._target(*self._args, **self._kwargs)
File "/Users//stable-diffusion-webui/modules/initialize.py", line 147, in load_model
shared.sd_model # noqa: B018
File "/Users//stable-diffusion-webui/modules/shared_items.py", line 110, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "/Users//stable-diffusion-webui/modules/sd_models.py", line 499, in get_sd_model
load_model()
File "/Users//stable-diffusion-webui/modules/sd_models.py", line 626, in load_model
load_model_weights(sd_model, checkpoint_info, state_dict, timer)
File "/Users//stable-diffusion-webui/modules/sd_models.py", line 381, in load_model_weights
model.half()
File "/Users//stable-diffusion-webui/venv/lib/python3.11/site-packages/lightning_fabric/utilities/device_dtype_mixin.py", line 98, in half
return super().half()
^^^^^^^^^^^^^^
File "/Users//stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1001, in half
return self._apply(lambda t: t.half() if t.is_floating_point() else t)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users//stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 797, in _apply
module._apply(fn)
File "/Users//stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 797, in _apply
module._apply(fn)
File "/Users//stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 797, in _apply
module._apply(fn)
[Previous line repeated 1 more time]
File "/Users//stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 820, in _apply
param_applied = fn(param)
^^^^^^^^^
File "/Users/***/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1001, in
Stable diffusion model failed to load
Applying attention optimization: sub-quadratic... done.
Loading weights [aadddd3d75] from /Users//stable-diffusion-webui/models/Stable-diffusion/deliberate_v3.safetensors
Creating model from config: /Users//stable-diffusion-webui/configs/v1-inference.yaml
Exception in thread Thread-2 (load_model):
Traceback (most recent call last):
File "/opt/homebrew/Cellar/[email protected]/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
self.run()
File "/opt/homebrew/Cellar/[email protected]/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py", line 982, in run
self._target(*self._args, **self._kwargs)
File "/Users//stable-diffusion-webui/modules/initialize.py", line 153, in load_model
devices.first_time_calculation()
File "/Users//stable-diffusion-webui/modules/devices.py", line 152, in first_time_calculation
conv2d(x)
TypeError: 'NoneType' object is not callable
loading stable diffusion model: TypeError
Traceback (most recent call last):
File "/opt/homebrew/Cellar/[email protected]/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py", line 1002, in _bootstrap
self._bootstrap_inner()
File "/opt/homebrew/Cellar/[email protected]/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
self.run()
File "/Users//stable-diffusion-webui/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/Users//stable-diffusion-webui/venv/lib/python3.11/site-packages/gradio/utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "/Users//stable-diffusion-webui/modules/ui_extra_networks.py", line 392, in pages_html
return refresh()
File "/Users//stable-diffusion-webui/modules/ui_extra_networks.py", line 398, in refresh
pg.refresh()
File "/Users//stable-diffusion-webui/modules/ui_extra_networks_textual_inversion.py", line 13, in refresh
sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True)
File "/Users//stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 255, in load_textual_inversion_embeddings
self.expected_shape = self.get_expected_shape()
File "/Users//stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 154, in get_expected_shape
vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
File "/Users//stable-diffusion-webui/modules/shared_items.py", line 110, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "/Users//stable-diffusion-webui/modules/sd_models.py", line 499, in get_sd_model
load_model()
File "/Users//stable-diffusion-webui/modules/sd_models.py", line 626, in load_model
load_model_weights(sd_model, checkpoint_info, state_dict, timer)
File "/Users//stable-diffusion-webui/modules/sd_models.py", line 381, in load_model_weights
model.half()
File "/Users//stable-diffusion-webui/venv/lib/python3.11/site-packages/lightning_fabric/utilities/device_dtype_mixin.py", line 98, in half
return super().half()
^^^^^^^^^^^^^^
File "/Users//stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1001, in half
return self._apply(lambda t: t.half() if t.is_floating_point() else t)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users//stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 797, in _apply
module._apply(fn)
File "/Users//stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 797, in _apply
module._apply(fn)
File "/Users//stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 797, in _apply
module._apply(fn)
[Previous line repeated 1 more time]
File "/Users//stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 820, in _apply
param_applied = fn(param)
^^^^^^^^^
File "/Users/*/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1001, in
I also encountered the same problem
Same here: M2 CPU and macOS Sonoma. Occurs when I try generating an image.
Same here, on a Mac Studio M2
Same for me too, has anyone found a solution to this?
Same over here. Being new to this, I assumed it was a rookie error, but it seems everyone is having this issue.
Any updates?
--disable-model-loading-ram-optimization
@zag13
--disable-model-loading-ram-optimization
This finally helped! Works! Thanks!
Where should I write it?
--disable-model-loading-ram-optimization
Where should I put this line?
@zag13
--disable-model-loading-ram-optimization
This finally helped! Works! Thanks!
How to use this? I also encountered this error.
Command-line parameters.
Should I enter this command directly into the terminal?
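To answer the recurring question: `--disable-model-loading-ram-optimization` is a command-line argument for the web UI, not a terminal command on its own. The usual place to put it is the `COMMANDLINE_ARGS` variable in `webui-user.sh` (a sketch, assuming the default AUTOMATIC1111 layout; exporting the variable in your shell session before running `./webui.sh` should also work):

```shell
#!/usr/bin/env bash
# In webui-user.sh, set the arguments that webui.sh passes to launch.py:
export COMMANDLINE_ARGS="--disable-model-loading-ram-optimization"

# Then relaunch the UI:
#   ./webui.sh

# Sanity check: the flag is now part of the launch arguments.
echo "$COMMANDLINE_ARGS"
```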
Problem solved, thank you.