
[Bug] Error completing request

Open · DarkVamprism opened this issue 2 years ago · 8 comments

I currently get this error if I try to generate images using ControlNet; I'm not sure what I am doing wrong.

I used to have MagicPrompt installed, but I have since deleted it, so I'm not sure why it still shows up in the arguments below, or whether it has anything to do with the issue.

    Error completing request
    Arguments: ('task(ztdp0zfi4an3bzb)', 0, 'Man holding a beer', '', [], None, None, None, None, None, None, None, 20, 0, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.75, -1.0, -1.0, 0, 0, 0, False, 512, 512, 0, 0, 32, 0, '', '', '', [], 0, False, True, False, 0, -1, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', True, 'scribble', 'control_scribble-fp16 [c508311e]', 1, {'image': array([[[255, 255, 255], ...]], dtype=uint8), 'mask': array([[[0, 0, 0, 255], ...]], dtype=uint8)}, True, 'Scale to Fit (Inner Fit)', False, False, 512, 64, 64, True, -1.0, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 1, '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0, False, 0, True, 384, 384, False, 2, True, True, False, False, 'Euler a', 0.95, 0.75, 'zero', 'pos', 'linear', 0.01, 0.0, 0.75, None, 'Lanczos', 1, 0, 0) {}
    Traceback (most recent call last):
      File "X:\Program Files\Stable Diffusion\stable-diffusion-webui\modules\call_queue.py", line 56, in f
        res = list(func(*args, **kwargs))
      File "X:\Program Files\Stable Diffusion\stable-diffusion-webui\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "X:\Program Files\Stable Diffusion\stable-diffusion-webui\modules\img2img.py", line 85, in img2img
        image = init_img.convert("RGB")
    AttributeError: 'NoneType' object has no attribute 'convert'

DarkVamprism · Feb 17 '23 12:02

Could you share any error log that appears before "Error completing request", if available?

Mikubill · Feb 17 '23 12:02

Sorry, I must be confused about how to run this. I get the error when I use the img2img tab with a prompt and no image in the img2img upload area, but with my sketch uploaded to the ControlNet image upload. I just tried it with the sketch also uploaded to the img2img upload area, and it generated an image with no error.

I was using Scribble mode: putting a sketch in the ControlNet upload, checking "Enable" and "Scribble Mode" (because it was black pen on a white background), and selecting scribble as the preprocessor and "control_scribble-fp16" as the model, with all other options left at their defaults.

Am I supposed to have an image in the img2img upload, with a sketch in the ControlNet upload?

Edit: I'm coming from the Unprompted ControlNet implementation, which used the img2img page, so I wrongly assumed that img2img couldn't be done and that the img2img upload wasn't needed.

DarkVamprism · Feb 17 '23 13:02

I'm having the same issue. In the img2img tab, I entered a prompt, uploaded a sketch to the ControlNet image area but not the img2img area, checked Enable and Low VRAM, and selected the openpose preprocessor and the OpenPose model from here. I'm getting the exact same error log as DarkVamprism. I'm on a Mac, if that matters.

I also tried it in txt2img and got a different error, below. It does generate an image from the prompt, but the image appears unrelated to the sketch I uploaded in ControlNet.

    Error running process: /Users/tom/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py
    Traceback (most recent call last):
      File "/Users/tom/stable-diffusion-webui/modules/scripts.py", line 386, in process
        script.process(p, *script_args)
      File "/Users/tom/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 423, in process
        network = network_module(
      File "/Users/tom/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/cldm.py", line 90, in __init__
        p_new = p + unet_state_dict[key_name].clone().cpu()
    RuntimeError: The size of tensor a (4) must match the size of tensor b (9) at non-singleton dimension 1

tomrecht · Feb 18 '23 08:02

Note that if you don't want to upload an image to the img2img area, simply use the txt2img tab instead.
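
For context on the first traceback: the failing line in modules/img2img.py is image = init_img.convert("RGB"), and init_img is None whenever the img2img upload area is empty, because the ControlNet upload slot does not feed it. A minimal sketch of the idea; the guard and helper name are hypothetical, not actual webui code:

    from PIL import Image

    def load_init_image(init_img: Image.Image | None) -> Image.Image:
        # img2img always converts the uploaded init image; the ControlNet
        # upload slot does not stand in for it, so init_img arrives as None
        # when the img2img area is left empty.
        if init_img is None:
            raise ValueError(
                "img2img needs an image in its own upload area; to drive "
                "generation from the ControlNet image alone, use txt2img"
            )
        return init_img.convert("RGB")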

RuntimeError: The size of tensor a (4) must match the size of tensor b (9) at non-singleton dimension 1

As for this, it seems the difference model you downloaded is not compatible with the inpainting base model - the channel counts differ, so the weights couldn't be merged automatically.
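
To make the merge failure concrete, here is a sketch with dummy tensors. The 9-channel shape assumes the SD1.x-inpainting UNet layout (4 noised-latent + 4 masked-image-latent + 1 mask channels); the 4-channel shape is the standard UNet the difference weights were extracted against:

    import torch

    # First-conv weights, shaped [out_ch, in_ch, kH, kW]
    diff_weight = torch.zeros(320, 4, 3, 3)     # difference ControlNet, built against a 4-channel UNet
    inpaint_weight = torch.zeros(320, 9, 3, 3)  # SD1.5-inpainting UNet: 9 input channels

    try:
        # Mirrors the merge step in cldm.py: p_new = p + unet_state_dict[key_name]
        p_new = diff_weight + inpaint_weight
    except RuntimeError as e:
        print(e)  # The size of tensor a (4) must match the size of tensor b (9) at non-singleton dimension 1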

Mikubill · Feb 18 '23 08:02

Thanks. I downloaded your model and tried that in txt2img, but am getting a different error:

    Error completing request
    Arguments: ('task(ursjrizp1dv8tmh)', 'wizard', '', [], 5, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, True, 'openpose', 'control_sd15_openpose [fef5e48e]', 1, {'image': array([[[133, 162, 227], ...]], dtype=uint8), 'mask': array([[[0, 0, 0, 255], ...]], dtype=uint8)}, False, 'Scale to Fit (Inner Fit)', False, True, 512, 64, 64, 1, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0) {}

    Traceback (most recent call last):
      File "/Users/tom/stable-diffusion-webui/modules/call_queue.py", line 56, in f
        res = list(func(*args, **kwargs))
      File "/Users/tom/stable-diffusion-webui/modules/call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "/Users/tom/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
        processed = process_images(p)
      File "/Users/tom/stable-diffusion-webui/modules/processing.py", line 486, in process_images
        res = process_images_inner(p)
      File "/Users/tom/stable-diffusion-webui/modules/processing.py", line 628, in process_images_inner
        samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
      File "/Users/tom/stable-diffusion-webui/modules/processing.py", line 828, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "/Users/tom/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 323, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
      File "/Users/tom/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 221, in launch_sampling
        return func()
      File "/Users/tom/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 323, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
      File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "/Users/tom/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 145, in sample_euler_ancestral
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
        return forward_call(*input, **kwargs)
      File "/Users/tom/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 116, in forward
        x_out = self.inner_model(x_in, sigma_in, cond={"c_crossattn": [cond_in], "c_concat": [image_cond_in]})
      File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
        return forward_call(*input, **kwargs)
      File "/Users/tom/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 114, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "/Users/tom/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 140, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "/Users/tom/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "/Users/tom/stable-diffusion-webui/modules/sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "/Users/tom/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
        return forward_call(*input, **kwargs)
      File "/Users/tom/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1333, in forward
        out = self.diffusion_model(xc, t, context=cc)
      File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
        return forward_call(*input, **kwargs)
      File "/Users/tom/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/cldm.py", line 169, in forward2
        return forward(*args, **kwargs)
      File "/Users/tom/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/cldm.py", line 135, in forward
        control = outer.control_model(x=x, hint=outer.hint_cond, timesteps=timesteps, context=context)
      File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
        return forward_call(*input, **kwargs)
      File "/Users/tom/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/cldm.py", line 457, in forward
        h = module(h, emb, context)
      File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
        return forward_call(*input, **kwargs)
      File "/Users/tom/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 86, in forward
        x = layer(x)
      File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
        return forward_call(*input, **kwargs)
      File "/Users/tom/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 182, in lora_Conv2d_forward
        return lora_forward(self, input, torch.nn.Conv2d_forward_before_lora(self, input))
      File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 457, in forward
        return self._conv_forward(input, self.weight, self.bias)
      File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 453, in _conv_forward
        return F.conv2d(input, weight, bias, self.stride,
    RuntimeError: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 9, 64, 64] to have 4 channels, but got 9 channels instead

tomrecht · Feb 18 '23 09:02

What base model are you using? Try something like sd1.4/sd1.5/any3, etc.

Mikubill · Feb 18 '23 09:02

That was with sd1.5-inpainting, but I just tried sd1.4 and it generates an image in txt2img without an error. The image doesn't seem to have much relation to the sketch I uploaded, but maybe I just need to play with the settings.

In addition to the generated image, it also shows this strange bit of abstract art -- is that expected?

tomrecht · Feb 18 '23 20:02

RuntimeError: The size of tensor a (4) must match the size of tensor b (9) at non-singleton dimension 1

Confirmed: this only occurs with the SD1.5-inpainting base model, with any ControlNet model (HED, Canny, OpenPose, etc.). The same error occurred here during an inpaint using the SD1.5-inpainting model; after switching to another inpainting model, the ControlNet process ran fine without error.
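
For the record, the runtime error in txt2img has the same root cause as the merge error: the inpainting base hands the ControlNet a 9-channel latent while its input conv was built for 4. A sketch with dummy tensors reproducing just that conv failure (the 4+4+1 channel breakdown is assumed from SD1.x-inpainting's standard layout, not taken from this thread):

    import torch
    import torch.nn.functional as F

    # Where the 9 comes from: 4 noised-latent channels
    # + 4 masked-image latent channels + 1 mask channel.
    weight = torch.zeros(320, 4, 3, 3)  # ControlNet input conv, built for a standard 4-channel UNet
    x = torch.zeros(2, 9, 64, 64)       # latent batch as conditioned by an inpainting base model

    try:
        F.conv2d(x, weight, stride=1, padding=1)
    except RuntimeError as e:
        print(e)  # Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 9, 64, 64] to have 4 channels...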

edwios · Mar 05 '23 12:03