sd-webui-controlnet
[Bug] Error completing request
I currently get this error if I try and generate images using controlnet, not sure what I am doing wrong.
I used to have MagicPrompt Installed but I had deleted it so I am not sure why it is showing below or if that even has anything to do with the issue
Error completing request
Arguments: ('task(ztdp0zfi4an3bzb)', 0, 'Man holding a beer', '', [], None, None, None, None, None, None, None, 20, 0, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.75, -1.0, -1.0, 0, 0, 0, False, 512, 512, 0, 0, 32, 0, '', '', '', [], 0, False, True, False, 0, -1, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', True, 'scribble', 'control_scribble-fp16 [c508311e]', 1, {'image': array([[[255, 255, 255], [255, 255, 255], ...]], dtype=uint8), 'mask': array([[[0, 0, 0, 255], [0, 0, 0, 255], ...]], dtype=uint8)}, True, 'Scale to Fit (Inner Fit)', False, False, 512, 64, 64, True, -1.0, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 1, '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0, False, 0, True, 384, 384, False, 2, True, True, False, False, 'Euler a', 0.95, 0.75, 'zero', 'pos', 'linear', 0.01, 0.0, 0.75, None, 'Lanczos', 1, 0, 0) {}
Traceback (most recent call last):
File "X:\Program Files\Stable Diffusion\stable-diffusion-webui\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "X:\Program Files\Stable Diffusion\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "X:\Program Files\Stable Diffusion\stable-diffusion-webui\modules\img2img.py", line 85, in img2img
image = init_img.convert("RGB")
AttributeError: 'NoneType' object has no attribute 'convert'
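The traceback bottoms out in img2img.py calling init_img.convert("RGB") on None: img2img always expects an image in its own upload area, and the ControlNet upload does not fill that role. A minimal sketch of the failing call path (the guard, function name, and stub class are hypothetical illustrations, not WebUI code):

```python
class StubImage:
    """Hypothetical stand-in for PIL.Image.Image, just enough to show the call."""
    def convert(self, mode):
        return f"converted to {mode}"

def prepare_init_img(init_img):
    # modules/img2img.py calls init_img.convert("RGB") unconditionally, so an
    # empty img2img upload area (init_img is None) produces
    # AttributeError: 'NoneType' object has no attribute 'convert'.
    # This hypothetical guard fails earlier with a clearer message instead.
    if init_img is None:
        raise ValueError("img2img needs an image in its own upload area")
    return init_img.convert("RGB")

print(prepare_init_img(StubImage()))  # converted to RGB
try:
    prepare_init_img(None)
except ValueError as err:
    print("guarded:", err)
```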
Could you share any error log that appears before "Error completing request", if one is available?
Sorry, I must be confused about how to run this. I get the error when I use the img2img tab with a prompt and no image in the img2img upload area, but with my sketch uploaded to the ControlNet image upload. I just tried it again with an image also in the img2img upload area, and it generated an image with no error.
I was using Scribble mode: I put a sketch in the ControlNet upload, checked "Enable" and "Scribble Mode" (because it was black pen on a white background), and selected scribble as the preprocessor and "control_scribble-fp16" as the model, with all other options left at their defaults.
Am I supposed to have an image in the img2img upload in addition to the sketch in the ControlNet upload?
Edit: I'm coming from the Unprompted ControlNet implementation, which used the img2img page, so I wrongly assumed that it couldn't be done without img2img and that the img2img upload wasn't needed.
I'm having the same issue. In img2img tab, entered a prompt, uploaded a sketch to the ControlNet image area but not the img2img area, checked Enable and Low VRAM, selected openpose preprocessor and the OpenPose model from here. Getting the exact same error log as DarkVamprism. I'm on a Mac if that matters.
I also tried it on txt2img and get a different error, below. It does generate an image from the prompt, but the image appears unrelated to the sketch I uploaded in ControlNet.
Error running process: /Users/tom/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py
Traceback (most recent call last):
File "/Users/tom/stable-diffusion-webui/modules/scripts.py", line 386, in process
script.process(p, *script_args)
File "/Users/tom/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 423, in process
network = network_module(
File "/Users/tom/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/cldm.py", line 90, in __init__
p_new = p + unet_state_dict[key_name].clone().cpu()
RuntimeError: The size of tensor a (4) must match the size of tensor b (9) at non-singleton dimension 1
Note that if you don't want to upload an image to the img2img area, simply use the txt2img tab.
RuntimeError: The size of tensor a (4) must match the size of tensor b (9) at non-singleton dimension 1
As for this, it seems the difference model you downloaded is not compatible with the inpainting model: the input channel counts differ, so the weights couldn't be merged automatically.
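Concretely, the merge in cldm.py adds the extracted "difference" weights onto the base UNet's weights elementwise. A difference model extracted against a standard SD 1.5 UNet has a first conv expecting 4 latent input channels, while the inpainting UNet's first conv takes 9 (4 latent + 4 masked-image latent + 1 mask), so the addition fails on the channel dimension. A sketch with numpy arrays standing in for torch tensors (the shapes are the conventional ones, not read from the actual checkpoints):

```python
import numpy as np

# First UNet conv weight shape: (out_channels, in_channels, 3, 3).
diff_weight = np.zeros((320, 4, 3, 3), dtype=np.float32)     # difference ControlNet (4-channel base)
inpaint_weight = np.zeros((320, 9, 3, 3), dtype=np.float32)  # sd1.5-inpainting UNet (9-channel)

try:
    merged = diff_weight + inpaint_weight  # the elementwise merge cldm.py attempts
except ValueError as err:
    # Same mismatch the RuntimeError reports: 4 vs 9 at the channel dimension.
    print("merge failed:", err)
```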
Thanks. I downloaded your model and tried that in txt2img, but am getting a different error:
Error completing request
Arguments: ('task(ursjrizp1dv8tmh)', 'wizard', '', [], 5, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, True, 'openpose', 'control_sd15_openpose [fef5e48e]', 1, {'image': array([[[133, 162, 227], [133, 162, 227], ...]], dtype=uint8), 'mask': array([[[0, 0, 0, 255], [0, 0, 0, 255], ...]], dtype=uint8)}, False, 'Scale to Fit (Inner Fit)', False, True, 512, 64, 64, 1, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0) {}
Traceback (most recent call last):
File "/Users/tom/stable-diffusion-webui/modules/call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "/Users/tom/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/Users/tom/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
processed = process_images(p)
File "/Users/tom/stable-diffusion-webui/modules/processing.py", line 486, in process_images
res = process_images_inner(p)
File "/Users/tom/stable-diffusion-webui/modules/processing.py", line 628, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "/Users/tom/stable-diffusion-webui/modules/processing.py", line 828, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "/Users/tom/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 323, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/Users/tom/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 221, in launch_sampling
return func()
File "/Users/tom/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 323, in
What base model are you using? Try something like sd1.4/sd1.5/any3, etc.
That was with sd1.5-inpainting, but I just tried sd1.4 and it generates an image in txt2img without an error. The image doesn't seem to have much relation to the sketch I uploaded, but maybe I just need to play with the settings.
In addition to the generated image, it also shows this strange bit of abstract art. Is that expected?
RuntimeError: The size of tensor a (4) must match the size of tensor b (9) at non-singleton dimension 1
Affirmative: this only occurs with sd1.5-inpainting combined with any ControlNet model (HED, Canny, OpenPose, etc.). The same error occurred here during an inpaint using the SD 1.5 inpainting model with any ControlNet model; after switching to another inpainting model, the ControlNet process completed without error.
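One way to spot the incompatibility before merging is to check the input channel count of the loaded checkpoint's first UNet conv, which in Stable Diffusion checkpoints lives under the key model.diffusion_model.input_blocks.0.0.weight: its second dimension is 4 for standard models and 9 for inpainting models. A sketch using numpy arrays to simulate the state dicts (real checkpoints hold torch tensors with the same shapes; the helper function is hypothetical):

```python
import numpy as np

# Key of the first UNet conv in a Stable Diffusion checkpoint's state dict.
FIRST_CONV = "model.diffusion_model.input_blocks.0.0.weight"

def model_kind(state_dict):
    """Hypothetical helper: classify a checkpoint by UNet input channel count."""
    in_ch = state_dict[FIRST_CONV].shape[1]
    return "inpainting (9-channel)" if in_ch == 9 else f"standard ({in_ch}-channel)"

# Simulated state dicts with the conventional weight shapes:
sd15 = {FIRST_CONV: np.zeros((320, 4, 3, 3))}
sd15_inpaint = {FIRST_CONV: np.zeros((320, 9, 3, 3))}

print(model_kind(sd15))          # standard (4-channel)
print(model_kind(sd15_inpaint))  # inpainting (9-channel)
```

A 4-channel difference ControlNet can only be merged onto a checkpoint that the helper classifies as standard (4-channel), which matches the behavior reported above.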