sd-webui-controlnet
[Bug]: Channel mismatch when running on Linux
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits of both this extension and the webui.
What happened?
Running control_v11f1p_sd15_depth [cfd03158], control_v11p_sd15_canny [d14c016b], or control_sd15_depth_anything [48a4bc3a] all result in
RuntimeError: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 5, 64, 64] to have 4 channels, but got 5 channels instead
This problem does not occur when I run locally on my Mac; the error above only appears when running on Linux.
I've tried enabling "Do not append detectmap to output" in Settings, but it did not fix it.
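For context, the error itself is just PyTorch's channel check on the first ControlNet convolution: the layer's weight has shape `[320, 4, 3, 3]` (4 input channels), but it receives a 5-channel latent. A minimal sketch reproducing the same RuntimeError (the layer and input shapes are taken from the traceback; this is only an illustration of the check, not the actual webui code path):

```python
import torch
import torch.nn as nn

# First conv of the ControlNet input block: weight shape [320, 4, 3, 3],
# i.e. it expects latents with 4 channels.
conv = nn.Conv2d(in_channels=4, out_channels=320, kernel_size=3, padding=1)

# The latent that actually arrives has 5 channels (batch of 2, 64x64),
# matching input[2, 5, 64, 64] from the error message.
x = torch.randn(2, 5, 64, 64)

try:
    conv(x)
except RuntimeError as e:
    print(e)  # "... expected input[2, 5, 64, 64] to have 4 channels, but got 5 channels instead"
```

Judging from the traceback (the extra channel appears between `sample_img2img`, which passes `image_conditioning`, and the ControlNet forward), the fifth channel is presumably an image-conditioning channel being concatenated onto the 4-channel latent before it reaches the ControlNet model, but that is a guess based on the call stack, not a confirmed diagnosis.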
Steps to reproduce the problem
What should have happened?
Generation should run smoothly, without errors.
Commit where the problem happens
webui: latest
controlnet: latest
What browsers do you use to access the UI?
No response
Command Line Arguments
export COMMANDLINE_ARGS="--api --share --enable-insecure-extension-access"
List of enabled extensions
The only non-built-in I have is sd-webui-controlnet.
Console logs
File "~/stable-diffusion-webui/modules/api/api.py", line 440, in img2imgapi
processed = process_images(p)
File "~/stable-diffusion-webui/modules/processing.py", line 734, in process_images
res = process_images_inner(p)
File "~/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 41, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args,**kwargs)
File "~/stable-diffusion-webui/modules/processing.py", line 868, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "~/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 438, in process_sample
return process.sample_before_CN_hack(*args, **kwargs)
File "~/stable-diffusion-webui/modules/processing.py", line 1527, in sample
samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
File "~/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 188, in sample_img2img
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "~/stable-diffusion-webui/modules/sd_samplers_common.py", line 261, in launch_sampling
return func()
File "~/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 188, in <lambda>
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "~/stable-diffusion-webui/venv/lib64/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "~/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 128, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
File "~/stable-diffusion-webui/venv/lib64/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "~/stable-diffusion-webui/modules/sd_samplers_cfg_denoiser.py", line 169, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
File "~/stable-diffusion-webui/venv/lib64/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "~/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "~/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "~/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "~/stable-diffusion-webui/modules/sd_hijack_utils.py", line 28, in __call__
return self.__orig_func(*args, **kwargs)
File "~/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "~/stable-diffusion-webui/venv/lib64/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "~/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1339, in forward
out = self.diffusion_model(xc, t, context=cc)
File "~/stable-diffusion-webui/venv/lib64/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "~/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 862, in forward_webui
raise e
File "~/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 859, in forward_webui
return forward(*args, **kwargs)
File "~/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 581, in forward
control = param.control_model(
File "~/stable-diffusion-webui/venv/lib64/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "~/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/cldm.py", line 31, in forward
return self.control_model(*args, **kwargs)
File "~/stable-diffusion-webui/venv/lib64/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "~/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/cldm.py", line 310, in forward
h = module(h, emb, context)
File "~/stable-diffusion-webui/venv/lib64/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "~/stable-diffusion-webui/repositories/generative-models/sgm/modules/diffusionmodules/openaimodel.py", line 102, in forward
x = layer(x)
File "~/stable-diffusion-webui/venv/lib64/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "~/stable-diffusion-webui/extensions-builtin/Lora/networks.py", line 501, in network_Conv2d_forward
return originals.Conv2d_forward(self, input)
File "~/stable-diffusion-webui/venv/lib64/python3.9/site-packages/torch/nn/modules/conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "~/stable-diffusion-webui/venv/lib64/python3.9/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 5, 64, 64] to have 4 channels, but got 5 channels instead