ControlNet-for-Diffusers
Transfer ControlNet to any base model in diffusers🔥
Thanks for your work. After I converted the ControlNet .pth to diffusers, an error still occurred while loading. Did my conversion fail? How can I solve this problem? The error log is as...
Judging by the behavior of sd_inpaint, mask_image controls the area that needs to be regenerated, so the real difficulty is generating the mask image itself. From the...
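Since the excerpt above is truncated, here is a minimal sketch of how an inpainting mask image is commonly built with Pillow: white pixels mark the region to regenerate, black pixels are kept. The helper name and the rectangle coordinates are made up for illustration; real masks usually come from segmentation or user input rather than a fixed box.

```python
from PIL import Image, ImageDraw

def make_box_mask(size, box):
    """Build an inpainting mask: white = regenerate, black = keep.

    size: (width, height) of the target image.
    box:  (left, top, right, bottom) region to regenerate.
    """
    mask = Image.new("L", size, 0)   # start fully black: keep everything
    draw = ImageDraw.Draw(mask)
    draw.rectangle(box, fill=255)    # paint the region to regenerate white
    return mask

# Example: regenerate the central area of a 512x512 image.
mask_image = make_box_mask((512, 512), (128, 128, 384, 384))
```

The resulting `mask_image` can then be passed as the `mask_image` argument of an inpainting pipeline, matching the convention described above (mask controls the regenerated area).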
python ./scripts/convert_controlnet_to_diffusers.py --checkpoint_path control_any3_openpose.pth --dump_path control_any3_openpose --device cpu
The file convert_controlnet_to_diffusers.py was not found.
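One possibility, if the script is missing from this repo, is to use the converter shipped with the upstream diffusers repository instead. The script name below comes from upstream diffusers, not from this repo, and its flags vary between diffusers versions, so this is a sketch; check `--help` on your checkout first.

```shell
# Assumption: running from a clone of the upstream diffusers repo,
# which ships scripts/convert_original_controlnet_to_diffusers.py.
python scripts/convert_original_controlnet_to_diffusers.py \
    --checkpoint_path control_any3_openpose.pth \
    --dump_path control_any3_openpose
```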
Any chance you can add an example of using ControlNet with img2img to the Colab doc (without inpainting)? I followed the instructions and tried adding the StableDiffusionControlNetInpaintImg2ImgPipeline class without...
Thank you for your work. When I tried to reproduce the second part, your inpainting work, an error occurred. Your code imports KarrasDiffusionSchedulers, but this class does not exist...
It looks like something has changed in diffusers and it no longer has this param: ... and after comparing the output of the `AnyV3_Canny_Model.pth` I created against the WebUI (using the same prompt, seed, etc.),...
@haofanwang There are two pipelines, one for depth and one for openpose; let's call them pipe_control_depth and pipe_control_openpose. Which pipeline should be used for generating the output? What does the input...
I tried testing the two algorithms, but the results look similar. Am I right that the img2img version additionally takes into account the image input allowed by...