
Everything was working fine until it started giving me this error.

Open · ZCryler1 opened this issue on Feb 20 '23 · 6 comments

```
Traceback (most recent call last):
  File "B:\A.I\stable-diffusion-webui\modules\scripts.py", line 386, in process
    script.process(p, *script_args)
  File "B:\A.I\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 487, in process
    detected_map = preprocessor(input_image, res=pres, thr_a=pthr_a, thr_b=pthr_b)
  File "B:\A.I\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\processor.py", line 130, in openpose
    result, _ = model_openpose(img, has_hand)
  File "B:\A.I\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\openpose\__init__.py", line 40, in apply_openpose
    candidate, subset = body_estimation(oriImg)
  File "B:\A.I\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\openpose\body.py", line 63, in __call__
    paf = cv2.resize(paf, (oriImg.shape[1], oriImg.shape[0]), interpolation=cv2.INTER_CUBIC)
cv2.error: OpenCV(4.7.0) D:\a\opencv-python\opencv-python\opencv\modules\core\src\alloc.cpp:73: error: (-4:Insufficient memory) Failed to allocate 139460608 bytes in function 'cv::OutOfMemoryError'
```
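For scale: the failing call resizes the OpenPose part affinity field (PAF) back up to the source image's resolution, so the buffer cv2.resize must allocate grows with image area. A rough sketch of the arithmetic, assuming the standard OpenPose body model output (38 PAF channels, float32); the channel count and dtype are assumptions, not read from this log:

```python
# Sketch: estimate the output buffer cv2.resize must allocate when the
# OpenPose annotator upsamples its PAF back to the source image size.
# Assumes the standard body model: 38 PAF channels, float32 (4 bytes each).
def paf_resize_bytes(height: int, width: int, channels: int = 38, itemsize: int = 4) -> int:
    return height * width * channels * itemsize

# The log reports 139,460,608 bytes, i.e. 139_460_608 / (38 * 4) = 917_504
# pixels of image area, e.g. roughly a 896x1024 source image:
print(paf_resize_bytes(896, 1024))  # 139460608
```

Under those assumptions the allocation scales linearly with image area, so a smaller source image (or more free RAM) shrinks it proportionally.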

```
0%| | 0/16 [00:00<?, ?it/s]Error executing callback cfg_denoiser_callback for B:\A.I\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py
Traceback (most recent call last):
  File "B:\A.I\stable-diffusion-webui\modules\script_callbacks.py", line 161, in cfg_denoiser_callback
    c.callback(params)
  File "B:\A.I\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 121, in guidance_schedule_handler
    self.guidance_stopped = (x.sampling_step / x.total_sampling_steps) > self.stop_guidance_percent
  File "B:\A.I\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1269, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'PlugableControlModel' object has no attribute 'stop_guidance_percent'
```
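This AttributeError is the signature of a stale denoiser callback: a handler registered during an earlier run keeps firing after the PlugableControlModel state that set stop_guidance_percent is gone, which fits why a restart clears it. A hypothetical defensive rewrite of the failing line (a sketch, not a patch from the extension) would fall back to a safe default instead of raising:

```python
# Hypothetical hardening of cldm.py's guidance_schedule_handler: if a stale
# callback outlives the model that set stop_guidance_percent, default to 1.0
# ("never stop guidance") rather than raising AttributeError mid-sampling.
def guidance_schedule_handler(self, x):
    stop_at = getattr(self, "stop_guidance_percent", 1.0)
    self.guidance_stopped = (x.sampling_step / x.total_sampling_steps) > stop_at
```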

```
0%| | 0/16 [00:07<?, ?it/s]
Error completing request
Arguments: ('task(jm5doxexoshz88o)', 0, 'masterpiece, best quality, illustration, upper body, 1boy walking, looking at viewer, green hair, medium hair, yellow eyes, demon horns, black coat,cyberpunk city, trending on artstation,4k,', 'lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name', [], <PIL.Image.Image image mode=RGBA size=259x898 at 0x1A052318100>, None, None, None, None, None, None, 20, 0, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.75, -1.0, -1.0, 0, 0, 0, False, 512, 512, 0, 0, 32, 0, '', '', '', [], 0, True, 'openpose', 'control_sd15_openpose [fef5e48e]', 1, {'image': array([...uniform background pixel rows elided...], dtype=uint8), 'mask': array([...all-black RGBA mask rows elided...], dtype=uint8)}, False, 'Just Resize', False, True, 512, 64, 64, 1, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 1, 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0) {}
```

```
Traceback (most recent call last):
  File "B:\A.I\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "B:\A.I\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "B:\A.I\stable-diffusion-webui\modules\img2img.py", line 171, in img2img
    processed = process_images(p)
  File "B:\A.I\stable-diffusion-webui\modules\processing.py", line 486, in process_images
    res = process_images_inner(p)
  File "B:\A.I\stable-diffusion-webui\modules\processing.py", line 632, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "B:\A.I\stable-diffusion-webui\modules\processing.py", line 1048, in sample
    samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
  File "B:\A.I\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 322, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "B:\A.I\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 225, in launch_sampling
    return func()
  File "B:\A.I\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 322, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "B:\A.I\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "B:\A.I\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "B:\A.I\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "B:\A.I\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 123, in forward
    x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond={"c_crossattn": [cond_in[a:b]], "c_concat": [image_cond_in[a:b]]})
  File "B:\A.I\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "B:\A.I\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "B:\A.I\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "B:\A.I\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "B:\A.I\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "B:\A.I\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "B:\A.I\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1212, in _call_impl
    result = forward_call(*input, **kwargs)
  File "B:\A.I\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1329, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "B:\A.I\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "B:\A.I\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 168, in forward2
    return forward(*args, **kwargs)
  File "B:\A.I\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 125, in forward
    assert outer.hint_cond is not None, f"Controlnet is enabled but no input image is given"
AssertionError: Controlnet is enabled but no input image is given
```
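The final assertion comes from the forward hook ControlNet installs on the UNet: once the extension patches the diffusion model's forward, every later generation must supply a hint image or the patched function trips the assert. A simplified sketch of that hook pattern (illustrative only, not the extension's exact code) shows why a stale hook keeps failing until the WebUI is restarted:

```python
# Illustrative sketch of the hook pattern in cldm.py (not the exact source).
# The UNet's forward is replaced by a closure that requires a hint image;
# if the hook outlives the run that set hint_cond, later runs hit the assert.
class PlugableControlModel:
    def __init__(self):
        self.hint_cond = None  # set from the ControlNet input image each run

    def hook(self, unet):
        outer = self
        original_forward = unet.forward

        def forward_with_control(x, timesteps=None, context=None, **kwargs):
            # Fires if the hook survived a run that never set a hint image.
            assert outer.hint_cond is not None, \
                "Controlnet is enabled but no input image is given"
            # ... inject control features here, then defer to the original ...
            return original_forward(x, timesteps=timesteps, context=context, **kwargs)

        unet.forward = forward_with_control
```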

ZCryler1 · Feb 20 '23 02:02

Restarting your WebUI may help.

Mikubill · Feb 20 '23 02:02

I need to draw the whole image for it to work.

ZCryler1 · Feb 20 '23 02:02

I was getting this error too when using the OpenPose Editor addon; I had selected "openpose" as the preprocessor instead of "none".

zatt · Feb 20 '23 23:02

Same here; I am using the chilloutmix model + openpose in ControlNet.

Update: it was a stupid internet connection problem again. I solved it by downloading "body_pose_model.pth" and "hand_pose_model.pth" from https://huggingface.co/lllyasviel/ControlNet/tree/main/annotator/ckpts and placing these two files in \stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\openpose; then it worked for me.
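If the automatic download keeps failing, a small script can fetch the two checkpoints directly. This is a sketch: the DEST path assumes a default install layout and should be adjusted to your own webui directory, and it relies on Hugging Face's /resolve/ URLs serving the raw files.

```python
import os
import urllib.request

# Fetch the two OpenPose annotator checkpoints that the extension normally
# downloads on first use. Adjust DEST to match your own install location.
BASE = "https://huggingface.co/lllyasviel/ControlNet/resolve/main/annotator/ckpts/"
DEST = r"stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\openpose"

for name in ("body_pose_model.pth", "hand_pose_model.pth"):
    target = os.path.join(DEST, name)
    if not os.path.exists(target):
        print(f"Downloading {name} ...")
        urllib.request.urlretrieve(BASE + name, target)
```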

moopwd · Feb 21 '23 05:02

Had also been dabbling with openpose prior to getting this error. Disabled ControlNet via the enable button, removed the images, and changed the preprocessor and model to none, but still received the error. Restarting the webui worked; it seems openpose doesn't "de-load" itself or something?

Todgins · Feb 21 '23 06:02

Same issue while using img2img; commenting for updates.

Mistborn-First-Era · Feb 21 '23 19:02