ml-stable-diffusion
Inpainting support
Does this support inpainting, or do we have to wait a while longer?
+1 to this
I see there were already some plans for it, but commented out? Is anyone actively working on it?
https://github.com/apple/ml-stable-diffusion/blame/48f07f24891155a14c51dd835bba7371bdf32d0e/swift/StableDiffusion/pipeline/StableDiffusionPipeline.Configuration.swift#L14
I have read that in the python-diffusers world, inpainting works best with dedicated models that take an additional input (in the UNet?) to directly accept the inpaint mask data; this helps preserve the mask edge details. I wonder whether the ControlledUnet we now have also has an extra input to accept the ControlNet signal, and whether inpainting could leverage this and be much simpler to implement now than it was before ControlNet?
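For reference, the dedicated diffusers inpainting checkpoints widen the UNet's first convolution to 9 input channels: 4 noisy latents + 1 downsampled mask + 4 VAE latents of the masked image. A rough numpy sketch of the concatenation such a UNet consumes (shapes are illustrative, not this repo's code):

```python
import numpy as np

# 64x64 latent grid, i.e. a 512x512 image divided by the VAE's 8x factor.
h = w = 64
latents = np.random.randn(1, 4, h, w)               # noisy latents x_t
mask = np.random.rand(1, 1, h, w).round()           # 1 = hole to repaint
masked_image_latents = np.random.randn(1, 4, h, w)  # VAE(image * (1 - mask))

# A dedicated inpainting UNet concatenates all three along the channel
# axis and consumes the result directly, so the mask edges reach every
# denoising step instead of being applied only at the end.
unet_input = np.concatenate([latents, mask, masked_image_latents], axis=1)
print(unet_input.shape)  # (1, 9, 64, 64)
```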
Looks like this PR added hole punching support, allowing a ControlNet model to do inpainting. More info here https://github.com/godly-devotion/MochiDiffusion/pull/272
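For anyone curious, per-step latent blending is the usual mechanism behind this kind of "hole punching": after each denoising step, the latents outside the mask are overwritten with the original image's latents re-noised to the current timestep, so only the hole gets repainted. A generic numpy illustration (not the PR's actual code; `punch_hole` is a hypothetical helper):

```python
import numpy as np

def punch_hole(latents, noised_init_latents, mask):
    # mask == 1 inside the hole (region to repaint), 0 elsewhere.
    # Inside the hole we keep the model's denoised latents; outside it we
    # restore the source image's latents, re-noised to the current
    # timestep, so the untouched area stays faithful to the original.
    return mask * latents + (1.0 - mask) * noised_init_latents

# Toy 1x1x2x2 example: repaint only the left column.
latents = np.full((1, 1, 2, 2), 5.0)   # model output at this step
noised_init = np.zeros((1, 1, 2, 2))   # re-noised source latents
mask = np.array([[[[1.0, 0.0], [1.0, 0.0]]]])
blended = punch_hole(latents, noised_init, mask)
print(blended[0, 0])  # left column repainted (5.0), right column kept (0.0)
```

Because the blend happens inside the sampling loop rather than once at the end, it needs no dedicated inpainting checkpoint, which is why a ControlNet model can be used on top of it.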
It works fine.