stable-diffusion-webui
[Bug]: RuntimeError: Input type (float) and bias type (c10::Half) should be the same
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What happened?
I am using the new ControlNet reference_only preprocessor, and it gives this error: RuntimeError: Input type (float) and bias type (c10::Half) should be the same
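For context, the log line "ControlNet used torch.float32 VAE to encode ..." further down and the traceback ending in F.conv2d suggest a float32 tensor reaching a Conv2d layer whose weights were loaded in fp16. A minimal standalone sketch of that class of mismatch (the layer and shapes here are illustrative, not taken from the webui):

    import torch

    # fp16 conv layer, like a UNet layer after the model is loaded in half
    # precision (i.e. without --no-half)
    conv = torch.nn.Conv2d(4, 4, kernel_size=3, padding=1).half()

    # float32 input, standing in for the float32-encoded reference latent
    x = torch.randn(1, 4, 64, 64)

    try:
        conv(x)  # dtype mismatch -> RuntimeError like the one above
    except RuntimeError as e:
        print(e)

    # keeping one dtype on both sides fixes it; the --no-half workaround
    # suggested in the comments below takes the "whole model in float32" route
    print(conv.float()(x).dtype)  # torch.float32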
Steps to reproduce the problem
- Go to ControlNet, choose reference_only as the preprocessor, and add your prompt
- Press generate
What should have happened?
It should have generated an image matching the reference image provided to ControlNet
Commit where the problem happens
b08500cec8a791ef20082628b49b17df833f5dda
What platforms do you use to access the UI?
MacOS
What browsers do you use to access the UI?
Google Chrome
Command Line Arguments
No (the launch log below shows the effective arguments: --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate)
List of extensions
dreambooth, controlnet
Console logs
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################
################################################################
Running on aniketsharma user
################################################################
################################################################
Repo already cloned, using it as install directory
################################################################
################################################################
Create and activate python venv
################################################################
################################################################
Launching launch.py...
################################################################
Python 3.10.11 (main, Apr 7 2023, 07:24:53) [Clang 14.0.0 (clang-1400.0.29.202)]
Version: v1.2.0
Commit hash: b08500cec8a791ef20082628b49b17df833f5dda
Installing requirements
Installing sd-webui-controlnet requirement: svglib
Couldn't install sd-webui-controlnet requirement: svglib.
Command: "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/venv/bin/python3.10" -m pip install svglib --prefer-binary
Error code: 1
stdout: Collecting svglib
Using cached svglib-1.5.1-py3-none-any.whl
Collecting reportlab (from svglib)
Using cached reportlab-4.0.0-py3-none-any.whl (1.9 MB)
Collecting lxml (from svglib)
Using cached lxml-4.9.2-cp310-cp310-macosx_13_0_arm64.whl
Collecting tinycss2>=0.6.0 (from svglib)
Using cached tinycss2-1.2.1-py3-none-any.whl (21 kB)
Collecting cssselect2>=0.2.0 (from svglib)
Using cached cssselect2-0.7.0-py3-none-any.whl (15 kB)
Collecting webencodings (from cssselect2>=0.2.0->svglib)
Using cached webencodings-0.5.1-py2.py3-none-any.whl (11 kB)
Requirement already satisfied: pillow>=9.0.0 in ./venv/lib/python3.10/site-packages (from reportlab->svglib) (9.4.0)
Collecting rlPyCairo<1,>=0.2.0 (from reportlab->svglib)
Using cached rlPyCairo-0.2.0-py3-none-any.whl (10 kB)
Collecting freetype-py<2.4,>=2.3.0 (from reportlab->svglib)
Using cached freetype_py-2.3.0-py3-none-macosx_10_9_universal2.whl (1.7 MB)
Collecting pycairo>=1.20.0 (from rlPyCairo<1,>=0.2.0->reportlab->svglib)
Using cached pycairo-1.23.0.tar.gz (344 kB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Installing backend dependencies: started
Installing backend dependencies: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Building wheels for collected packages: pycairo
Building wheel for pycairo (pyproject.toml): started
Building wheel for pycairo (pyproject.toml): finished with status 'error'
Failed to build pycairo
stderr: error: subprocess-exited-with-error
× Building wheel for pycairo (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [15 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-13-arm64-cpython-310
creating build/lib.macosx-13-arm64-cpython-310/cairo
copying cairo/__init__.py -> build/lib.macosx-13-arm64-cpython-310/cairo
copying cairo/__init__.pyi -> build/lib.macosx-13-arm64-cpython-310/cairo
copying cairo/py.typed -> build/lib.macosx-13-arm64-cpython-310/cairo
running build_ext
Package cairo was not found in the pkg-config search path.
Perhaps you should add the directory containing `cairo.pc'
to the PKG_CONFIG_PATH environment variable
No package 'cairo' found
Command '['pkg-config', '--print-errors', '--exists', 'cairo >= 1.15.10']' returned non-zero exit status 1.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pycairo
ERROR: Could not build wheels for pycairo, which is required to install pyproject.toml-based projects
Warning: Failed to install svglib, some preprocessors may not work.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
If submitting an issue on github, please provide the full startup log for debugging purposes.
Initializing Dreambooth
Dreambooth revision: 926ae204ef5de17efca2059c334b6098492a0641
Successfully installed fastapi-0.94.1 gitpython-3.1.31 requests-2.30.0 transformers-4.26.1
Does your project take forever to startup?
Repetitive dependency installation may be the reason.
Automatic1111's base project sets strict requirements on outdated dependencies.
If an extension is using a newer version, the dependency is uninstalled and reinstalled twice every startup.
Successfully installed MarkupSafe-2.1.2 filelock-3.12.0 jinja2-3.1.2 mpmath-1.3.0 mypy-extensions-1.0.0 networkx-3.1 numpy-1.24.3 pyre-extensions-0.0.23 sympy-1.12 torch-2.0.1 typing-extensions-4.5.0 typing-inspect-0.8.0 xformers-0.0.16
[!] xformers version 0.0.16 installed.
[+] torch version 2.0.1 installed.
[+] torchvision version 0.14.1 installed.
[+] accelerate version 0.18.0 installed.
[+] diffusers version 0.14.0 installed.
[+] transformers version 4.26.1 installed.
[+] bitsandbytes version 0.35.4 installed.
Launching Web UI with arguments: --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate
/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/venv/lib/python3.10/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: dlopen(/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/venv/lib/python3.10/site-packages/torchvision/image.so, 0x0006): Symbol not found: __ZN3c106detail19maybe_wrap_dim_slowExxb
Referenced from: <0B637046-A38B-3A5C-80C6-E847C27DCCD5> /Users/aniketsharma/Documents/fakes/stable-diffusion-webui/venv/lib/python3.10/site-packages/torchvision/image.so
Expected in: <3F9923D2-81A5-3EC8-9739-EC0C1C816132> /Users/aniketsharma/Documents/fakes/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/lib/libc10.dylib
warn(f"Failed to load image Python extension: {e}")
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
ControlNet v1.1.166
ControlNet v1.1.166
Loading weights [0f49d1caa2] from /Users/aniketsharma/Documents/fakes/stable-diffusion-webui/models/Stable-diffusion/perfectWorld_v3Baked.safetensors
Creating model from config: /Users/aniketsharma/Documents/fakes/stable-diffusion-webui/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying cross attention optimization (InvokeAI).
Textual inversion embeddings loaded(24): alexandradaddario, AnaD, awaitingtongue, bad-artist-anime, bad-hands-5, bad-picture-chill-75v, bad_prompt, bad_prompt_version2, badhandv4, BiL13L1SH, bukkakAI, By bad artist -neg, corneo_paizuri, corneo_runny_makeup, corneo_side_deepthroat, corneo_side_doggy, easynegative, emb-anada, jenn1f1850, ng_deepnegative_v1_75t, nudes, paradoxical-back-view-bj, thelab-ada, verybadimagenegative_v1.3
Textual inversion embeddings skipped(1): badquality
Model loaded in 8.9s (load weights from disk: 0.2s, create model: 0.5s, apply weights to model: 5.8s, apply half(): 1.7s, move model to device: 0.5s).
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 14.5s (import torch: 2.4s, import gradio: 0.4s, import ldm: 0.2s, other imports: 0.7s, load scripts: 10.0s, create ui: 0.7s).
100%|██████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:32<00:00, 1.62s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████████| 20/20 [00:31<00:00, 1.59s/it]
Loading preprocessor: reference_only
preprocessor resolution = 512
ControlNet used torch.float32 VAE to encode torch.Size([1, 4, 98, 64]).
0%| | 0/20 [00:02<?, ?it/s]
Error completing request
Arguments: ('task(7bqpn8a7giw8unf)', '((nsfw)), portrait, ultra realistic 8k cg, picture-perfect face, flawless, clean, masterpiece, professional artwork, famous artwork, cinematic lighting, cinematic bloom, perfect face, beautiful face, beautiful eyes, ((perfect female body, narrow waist)), masterpiece, ', '(blurry image:1.1), (blurry picture:1.1), (worst quality, low quality:1.4), (out of frame), duplication, (folds:1.7), lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, (extra arms), (extra legs), fused fingers, too many fingers, long neck, username, watermark, signature, monochrome, deformed legs, see-through, ng_deepnegative_v1_75t', [], 20, 15, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 784, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, <controlnet.py.UiControlNetUnit object at 0x2c8f24400>, <controlnet.py.UiControlNetUnit object at 0x2c8f275e0>, <controlnet.py.UiControlNetUnit object at 0x2c8f37880>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, False, None, False, None, False, 50) {}
Traceback (most recent call last):
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/modules/call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
processed = process_images(p)
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/modules/processing.py", line 526, in process_images
res = process_images_inner(p)
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/modules/processing.py", line 680, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/modules/processing.py", line 907, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 377, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 251, in launch_sampling
return func()
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 377, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 154, in forward
x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 114, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 140, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/modules/sd_hijack_utils.py", line 26, in __call__
return self.__sub_func(self.__orig_func, *args, **kwargs)
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/modules/sd_hijack_unet.py", line 45, in apply_model
return orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs).float()
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 402, in forward_webui
return forward(*args, **kwargs)
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 362, in forward
outer.original_forward(x=ref_xt, timesteps=timesteps, context=context)
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 807, in forward
return self.out(h)
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/container.py", line 217, in forward
input = module(input)
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 372, in lora_Conv2d_forward
return torch.nn.Conv2d_forward_before_lora(self, input)
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/extensions/a1111-sd-webui-lycoris/lycoris.py", line 746, in lyco_Conv2d_forward
return torch.nn.Conv2d_forward_before_lyco(self, input)
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/Users/aniketsharma/Documents/fakes/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (float) and bias type (c10::Half) should be the same
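One unrelated problem is also visible in the startup log above: svglib fails to install because pycairo's build cannot find the cairo C library through pkg-config ("No package 'cairo' found"). This does not cause the dtype error, but on macOS it is commonly resolved with Homebrew; a sketch, assuming Homebrew is installed:

    # install the cairo library and pkg-config so pycairo can build
    brew install cairo pkg-config
    # relaunch so the webui retries the svglib install
    ./webui.sh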
Additional information
No response
Same issue, using a Mac with an M1.
Same for me.
Same here. M1
Use the --no-half parameter.
Where and how? Can you please tell me?
You can pass it on the command line directly or edit the startup file. Try ./webui.sh --no-half to run the webui.
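For concreteness, a sketch of both options, assuming the stock webui-user.sh that ships with the repo (macOS/Linux):

    # Option 1: pass the flag for a single run
    ./webui.sh --no-half

    # Option 2: make it persistent by editing webui-user.sh,
    # appending --no-half to any flags you already use
    export COMMANDLINE_ARGS="--no-half"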
I am still having this problem after starting up with no half. Anyone else? Any other solutions?
Try updating ControlNet; that should solve this issue.
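If you prefer the command line to the Extensions tab, a sketch (assuming the default extension path shown in the traceback above):

    # pull the latest sd-webui-controlnet, then restart the webui
    cd stable-diffusion-webui/extensions/sd-webui-controlnet
    git pull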
This is what worked for me! I am using a 3060 Ti 8 GB: set COMMANDLINE_ARGS= --xformers --api --medvram --no-half-vae