sd-webui-controlnet
When "Only masked" is specified for Inpaint, the image is not drawn correctly
When "Only masked" is specified for Inpaint in the img2img tab, ControlNet may not render the image correctly.
When "Only masked" is specified, I think the image generated by the preprocessor needs to be cropped and applied within the masked range (a sketch of this follows the example below).
An example is shown below.
This is the original image.
This is the result of inpainting with "Only masked" and generating and applying a normal map.
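A minimal sketch of the cropping suggested above, assuming PIL/NumPy; crop_control_to_mask is a hypothetical helper, not the extension's actual code:

```python
# Hypothetical sketch: crop the preprocessor output (e.g. the normal map) to the
# mask's bounding box, expanded by the "Only masked padding, pixels" value, and
# resize it to the resolution that is actually denoised -- the same treatment
# webui gives the source image in "Only masked" mode.
import numpy as np
from PIL import Image

def crop_control_to_mask(control_map: Image.Image,
                         mask: Image.Image,
                         padding: int,
                         target_size: tuple[int, int]) -> Image.Image:
    m = np.array(mask.convert("L")) > 0
    ys, xs = np.nonzero(m)                      # assumes a non-empty mask
    x0, y0 = xs.min(), ys.min()
    x1, y1 = xs.max(), ys.max()
    # Expand the bounding box by the padding value, clamped to the image.
    x0 = max(int(x0) - padding, 0)
    y0 = max(int(y0) - padding, 0)
    x1 = min(int(x1) + padding, control_map.width - 1)
    y1 = min(int(y1) + padding, control_map.height - 1)
    crop = control_map.crop((x0, y0, x1 + 1, y1 + 1))
    return crop.resize(target_size, Image.LANCZOS)
```

Using the same padded rectangle that webui already uses to crop and upscale the source image would keep the control map aligned with what the denoiser actually sees.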
Are you using an inpainting model? You might also try increasing the padding pixels.
Definitely looks like the control image isn't being cropped and zoomed in the way the source image is. Adding more padding pixels wouldn't help.
Wait, I thought inpainting wasn't possible with ControlNet yet?
img2img already works, including masking. Could be that inpainting-specific models don't work yet?
I have inpainted an image of a cat using sd-v1-4.ckpt.
Increasing the "Only masked padding, pixels" value does indeed result in correct rendering, but I suspect that this is essentially the same as specifying "Whole picture" (see the rough numbers after this comment).
In addition, when inpainting using sd-v1-5-inpainting.ckpt with ControlNet enabled, the following error occurs.
RuntimeError: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 9, 64, 96] to have 4 channels, but got 9 channels instead
This may be another problem.
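On the padding point above, a rough illustration with made-up numbers of why a large "Only masked padding, pixels" value ends up behaving like "Whole picture":

```python
# Made-up numbers: how the padding value grows the "Only masked" crop. With a
# large enough padding the crop covers most of the picture, which is why big
# values make the control image line up again.
image_w, image_h = 512, 768    # full picture
bbox_w, bbox_h = 120, 150      # mask bounding box (just the cat's face, say)

for padding in (0, 32, 128, 256):
    crop_w = min(bbox_w + 2 * padding, image_w)
    crop_h = min(bbox_h + 2 * padding, image_h)
    coverage = (crop_w * crop_h) / (image_w * image_h)
    print(f"padding={padding:3d}: crop {crop_w}x{crop_h} ({coverage:.0%} of the picture)")
```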
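As for the RuntimeError above, here is a small PyTorch sketch of what the message is consistent with (an assumption about the cause, not a confirmed diagnosis): the SD 1.5 inpainting UNet feeds its first convolution a 9-channel latent, while the ControlNet-side copy of that convolution was built for the usual 4 channels.

```python
import torch
import torch.nn as nn

# Stand-in for the first conv of the copied encoder: 4 latent channels in,
# 320 out -> weight shape [320, 4, 3, 3].
conv_in = nn.Conv2d(4, 320, kernel_size=3, padding=1)

# The inpainting UNet's input instead concatenates 4 noisy-latent channels,
# 1 mask channel and 4 masked-image-latent channels -> 9 channels total.
noisy = torch.randn(2, 4, 64, 96)
mask = torch.randn(2, 1, 64, 96)
masked = torch.randn(2, 4, 64, 96)
x = torch.cat([noisy, mask, masked], dim=1)   # shape [2, 9, 64, 96]

conv_in(x)  # RuntimeError: ... expected input[2, 9, 64, 96] to have 4 channels, but got 9
```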
Seconded. Hopefully support for inpainting-specific models can be implemented; it would greatly enhance the process.
This is still not working correctly, and it's one of the most important features in webui. I think the pipeline order needs to be changed to take only the actual inpainted area into consideration, not the whole image. In "Inpaint only masked" mode the masked area becomes the size of the whole synthesized image resolution, so just the face of the cat becomes 512x512, which greatly improves details. The part of the image that's actually sent to the denoiser would have to be hijacked by ControlNet as well, so it can determine which area is used and which is ignored.
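A hypothetical sketch of that pipeline order (not the webui or ControlNet code): generate stands in for the actual denoising call, and box is the padded mask bounding box computed as in the earlier sketch.

```python
from PIL import Image

def inpaint_only_masked(source: Image.Image,
                        control_map: Image.Image,
                        mask: Image.Image,
                        box: tuple[int, int, int, int],  # padded mask bbox (left, top, right, bottom)
                        gen_size: tuple[int, int],       # e.g. (512, 512)
                        generate) -> Image.Image:
    # Crop source, mask AND control hint with the same rectangle, then upscale
    # everything to the full generation resolution.
    src_crop = source.crop(box).resize(gen_size)
    msk_crop = mask.crop(box).resize(gen_size)
    ctl_crop = control_map.crop(box).resize(gen_size)   # the "hijack": same crop for the hint
    result = generate(src_crop, msk_crop, ctl_crop)     # stand-in for the sampling call
    # Scale the result back down and paste only that region into the original.
    w, h = box[2] - box[0], box[3] - box[1]
    out = source.copy()
    out.paste(result.resize((w, h)), box[:2])
    return out
```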
Inpainting model support would be killer. Hopefully it gets implemented.
Same here, I get the error "RuntimeError: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 9, 64, 96] to have 4 channels, but got 9 channels instead". Please add support for the sd-v1-5-inpainting model.
Fixed in https://github.com/Mikubill/sd-webui-controlnet/commit/da7a3609e1011b7a2a1f020b77cb630743d11b2f
I still get the same "RuntimeError: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 9, 64, 64] to have 4 channels, but got 9 channels instead" error when trying to use an inpainting model.
same
Both the "Only masked" option and drawing in the inpainting model now work well. Thank you very much.
I'm still getting this error when trying to use the 1.5 inpaint model:
RuntimeError: The size of tensor a (4) must match the size of tensor b (9) at non-singleton dimension 1
I'm on the latest ControlNet commit attempting to use the 1.5 inpainting checkpoint.