ControlNet-for-Diffusers
Do we need to do all those conversions for inpainting?
Hello! Do we need to do all the conversions mentioned under "ControlNet + Anything-v3" for inpainting?
Also, the inpainting guide contains these lines:
# we have downloaded models locally, you can also load from huggingface
# control_sd15_seg is converted from control_sd15_seg.safetensors using instructions above
pipe_control = StableDiffusionControlNetInpaintPipeline.from_pretrained("./diffusers/control_sd15_seg",torch_dtype=torch.float16).to('cuda')
pipe_inpaint = StableDiffusionInpaintPipeline.from_pretrained("./diffusers/stable-diffusion-inpainting",torch_dtype=torch.float16).to('cuda')
Can anyone help me understand what the 2nd line means?
Also, for pipe_inpaint, do we pass the path of the Stable Diffusion diffusers model?
Thanks for your interest! @geekyayush
- Yes, you should strictly follow our instructions.
- pipe_inpaint is an inpainting model based on Stable Diffusion; we use runwayml/stable-diffusion-inpainting. We cannot directly load a plain Stable Diffusion model such as runwayml/stable-diffusion-v1-5: although both are based on stable-diffusion-1.5, their input channels are different (see the sketch just below).
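For reference, a minimal way to see that channel mismatch with diffusers (this snippet is not from the repo; it only loads the two UNets mentioned above and prints their expected input channels):
from diffusers import UNet2DConditionModel

# Load only the UNets to compare their expected input channels.
unet_sd = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet")
unet_inpaint = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-inpainting", subfolder="unet")

print(unet_sd.config.in_channels)       # 4: noisy latents only
print(unet_inpaint.config.in_channels)  # 9: latents + masked-image latents + mask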
Hey, just following up here! This might be a newbie misconception, but if we replace the UNet here, do we not lose the custom model, in this case Anything-v3? Or is it really just replacing the inpainting channels?
Thanks @haofanwang !
I have another question regarding this. If I want to use a Dreambooth-finetuned SD inpainting model, will these lines work?
pipe_control = StableDiffusionControlNetInpaintPipeline.from_pretrained("./diffusers/control_sd15_seg",torch_dtype=torch.float16).to('cuda')
pipe_inpaint = StableDiffusionInpaintPipeline.from_pretrained("./diffusers/my-dreambooh-inpaint-model",torch_dtype=torch.float16).to('cuda')
Here, for pipe_control, I am using the same control_sd15_seg model, and for pipe_inpaint, I am using my custom-trained model.
Thanks!
Let me answer everyone's concerns here.
@UglyStupidHonest You are right: for now, if you want to equip ControlNet with inpainting ability, you have to replace the whole base model, which means you cannot use Anything-v3 here. I did try replacing only the input layer and keeping all other layers from Anything-v3, but it works poorly.
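For the record, a rough sketch of that input-layer experiment (the local path ./diffusers/anything-v3 is an assumed converted checkpoint; since the result works poorly, treat this as illustration only):
from diffusers import UNet2DConditionModel

# Hypothetical local path to an Anything-v3 checkpoint converted to diffusers format.
unet_any = UNet2DConditionModel.from_pretrained("./diffusers/anything-v3", subfolder="unet")
unet_inpaint = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-inpainting", subfolder="unet")

# Start from the 9-channel inpainting UNet, then copy over every Anything-v3
# weight whose shape matches. Only conv_in.weight differs (4 vs 9 input channels),
# so the result keeps Anything-v3 everywhere except the input layer.
state = unet_inpaint.state_dict()
for name, param in unet_any.state_dict().items():
    if name in state and state[name].shape == param.shape:
        state[name] = param
unet_inpaint.load_state_dict(state)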
@geekyayush If your inpainting model has exactly the same layers as stable-diffusion-1.5, then it should work. You can think of ControlNet as a pluggable module that can be inserted into any stable-diffusion-1.5-based model.
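One hedged way to check that a custom (e.g. Dreambooth-finetuned) inpainting model has exactly the same layers as the base inpainting model before plugging ControlNet into it (the local path is the one from the question above):
from diffusers import UNet2DConditionModel

ref = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-inpainting", subfolder="unet")
# Path taken from the question above; replace it with your own fine-tuned model.
mine = UNet2DConditionModel.from_pretrained("./diffusers/my-dreambooh-inpaint-model", subfolder="unet")

# Compare every parameter name and shape between the two UNets.
ref_shapes = {k: v.shape for k, v in ref.state_dict().items()}
my_shapes = {k: v.shape for k, v in mine.state_dict().items()}
print("layer-compatible:", ref_shapes == my_shapes)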
Is StableDiffusionControlNetInpaintPipeline currently operable? Trying the sample code in this repo with the provided input images, segmentation map, and specified models, gives the following result in my environment:
╭─────────────────────────────── Traceback (most recent call last) ───────────────────────────────╮
│                                                                                                  │
│ /ingest/ImageDiffuserService/client/inpaint_proto.py:31 in