ControlNet
pixel_unshuffle expects width to be divisible by downscale_factor, but input.size(-1)=1000 is not divisible by 16
Hello, with SDXL CN I keep getting "pixel_unshuffle expects width to be divisible by downscale_factor, but input.size(-1)=1000 is not divisible by 16" unless I make the width and height the same. For example, if I set the width to 1000 and the height to 1200 I get this error, but it's fine if I make both 1000. I'm getting this with the canny, depth, scribble and lineart SDXL CN models, and probably others too, but I haven't tested them all yet. Is there something I am doing wrong?
In the screenshot above I used a 1000x1010 resolution, and like I said, it works fine if I use the exact same width and height (e.g. 800x800, 1000x1000, etc.).
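For context, the error seems to come straight from torch's pixel_unshuffle, which requires both spatial dimensions to be divisible by the downscale factor (16 here, per the message). A minimal sketch reproducing it outside the webui (assuming only torch is installed; the tensor sizes are just examples):

```python
import torch
import torch.nn.functional as F

# 16 is the downscale factor mentioned in the error message.
ok = torch.randn(1, 3, 1008, 1008)   # 1008 % 16 == 0 -> unshuffles fine
bad = torch.randn(1, 3, 1096, 1000)  # 1096 % 16 == 8 -> raises

F.pixel_unshuffle(ok, 16)   # works
F.pixel_unshuffle(bad, 16)  # RuntimeError: ... not divisible by downscale_factor
```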
Here's the full console error:
*** Error completing request *** Arguments: ('task(astt86172t72b46)', 0, 'ant drawing for kids', '', [], <PIL.Image.Image image mode=RGBA size=485x857 at 0x1673111B250>, None, None, None, None, None, None, 20, 'DPM++ 2S a Karras', 4, 0, 1, 1, 1, 7, 1.5, 0.75, 0, 1100, 1000, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x000001673111B430>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, False, False, 'base', False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 3072, 192, True, True, True, False, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x00000167218112D0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000016721811B10>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000016721811EA0>, None, False, '0', '0', 'inswapper_128.onnx', 'CodeFormer', 1, True, '', 1, 1, False, True, 1, 0, 
0, False, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, '*
CFG Scale
should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', 'Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8
', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', 'Will upscale the image by the selected scale factor; use width and height sliders to set tile size
', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50, 'Will upscale the image depending on the selected target size type
', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
Traceback (most recent call last):
  File "G:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "G:\stable-diffusion-webui\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "G:\stable-diffusion-webui\modules\img2img.py", line 208, in img2img
    processed = process_images(p)
  File "G:\stable-diffusion-webui\modules\processing.py", line 732, in process_images
    res = process_images_inner(p)
  File "G:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "G:\stable-diffusion-webui\modules\processing.py", line 867, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "G:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 451, in process_sample
    return process.sample_before_CN_hack(*args, **kwargs)
  File "G:\stable-diffusion-webui\modules\processing.py", line 1528, in sample
    samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
  File "G:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 188, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "G:\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
    return func()
  File "G:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 188, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "G:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "G:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 518, in sample_dpmpp_2s_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "G:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "G:\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 169, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
  File "G:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "G:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "G:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "G:\stable-diffusion-webui\modules\sd_models_xl.py", line 37, in apply_model
    return self.model(x, t, cond)
  File "G:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "G:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "G:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "G:\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\wrappers.py", line 28, in forward
    return self.diffusion_model(
  File "G:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "G:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 858, in forward_webui
    raise e
  File "G:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 855, in forward_webui
    return forward(*args, **kwargs)
  File "G:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 592, in forward
    control = param.control_model(x=x_in, hint=hint, timesteps=timesteps, context=context, y=y)
  File "G:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "G:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\adapter.py", line 70, in forward
    self.control = self.control_model(hint_in)
  File "G:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "G:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\adapter.py", line 270, in forward
    x = self.unshuffle(x)
  File "G:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "G:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\pixelshuffle.py", line 104, in forward
    return F.pixel_unshuffle(input, self.downscale_factor)
RuntimeError: pixel_unshuffle expects height to be divisible by downscale_factor, but input.size(-2)=1096 is not divisible by 16
can confirm
- will fix in next release
Thank you @lllyasviel
I opened an issue on Mikubill/sd-webui-controlnet
moments after you posted your reply here and I didn't see it until after I posted there 😬
I wasn't sure whether to open this issue here or on Mikubill's repo, so I thought why not both, since it breaks a big part of CN when using SDXL. 😁
@lllyasviel any idea when this will be fixed? thanks!
I'm also experiencing this issue.
Problem is still happening in January 2024. Happening a lot :/
I changed the dimension to a number divisible by 16 (e.g. 720) and it worked. I hope this helps someone :D
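In other words, the workaround is just to snap the width and height to multiples of 16 before generating. A small sketch of that idea (a hypothetical helper, not part of the webui or the ControlNet extension):

```python
def snap_to_multiple(value: int, multiple: int = 16) -> int:
    # Round to the nearest multiple of `multiple` (ties round up),
    # so pixel_unshuffle with downscale_factor=16 accepts the dimension.
    return max(multiple, ((value + multiple // 2) // multiple) * multiple)

print(snap_to_multiple(1000))  # 1008
print(snap_to_multiple(1010))  # 1008
print(snap_to_multiple(720))   # 720 (already divisible by 16)
print(snap_to_multiple(510))   # 512
```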
510 also gets this error.