
Image inpainting

Open krrishdholakia opened this issue 3 years ago • 4 comments

Hi,

Two quick questions around this:

  • Is there a Colab notebook or guide for using this model for image inpainting?
  • Given a source person image and a t-shirt image, how can I use a guided text prompt (e.g. "show the person wearing this t-shirt") to generate such an image?

krrishdholakia avatar Jul 31 '22 07:07 krrishdholakia

Did some further research:

  • If I have the cloth mask, cloth image, human image, parsed human image, and human pose, how can I concatenate these together as the input image for the diffusion model, have it generate an output, and then match that output against the expected one?

  • Ideally, I could just concatenate the cloth image + human image and check the output against the expected one.

Open to thoughts/ways of doing this.

krrishdholakia avatar Jul 31 '22 08:07 krrishdholakia

Hi @anton-l,

Just wanted to circle back to this. I'm not sure how I could concatenate the two images and pass them through the diffusion model. Curious if you have any ideas for how to approach this?

cc: @patrickvonplaten, @patil-suraj

krrishdholakia avatar Aug 13 '22 01:08 krrishdholakia

Hi @krrishdholakia! By setting in_channels and out_channels in the UNet configuration you can adapt it to concatenated inputs and outputs, e.g. in_channels=6 for two concatenated input images.
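For concreteness, a minimal sketch of the channel concatenation step (the tensor names and sizes are illustrative, not from the thread): two RGB images stacked along the channel axis produce the 6-channel input that an in_channels=6 UNet would expect.

```python
# Sketch: build a 6-channel input by concatenating two RGB images
# along the channel dimension (NCHW layout).
import torch

cloth_image = torch.randn(1, 3, 64, 64)   # illustrative RGB cloth image
person_image = torch.randn(1, 3, 64, 64)  # illustrative RGB person image

# Concatenate along the channel axis -> shape (1, 6, 64, 64)
model_input = torch.cat([cloth_image, person_image], dim=1)
print(model_input.shape)  # torch.Size([1, 6, 64, 64])
```

The same idea extends to more conditioning signals (mask, pose map, etc.): each one adds its channel count to in_channels.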

anton-l avatar Aug 13 '22 07:08 anton-l

@anton-l How would you calculate the loss at the interim stages for this, since you want it to generate a target image (i.e. the person wearing the clothing) that differs from the concatenated inputs (clothing item + source person image)?

```python
# Predict the noise residual
noise_pred = model(noisy_images, timesteps)["sample"]
loss = F.mse_loss(noise_pred, noise)
accelerator.backward(loss)
```

krrishdholakia avatar Aug 14 '22 06:08 krrishdholakia

Hey @anton-l, just wanted to follow up on this.

cc: @patil-suraj @patrickvonplaten

krrishdholakia avatar Aug 28 '22 00:08 krrishdholakia

@krrishdholakia the idea would be to feed the concatenated clothing + person images (6 channels), and have 6 channels as output as well (since the number of channels needs to match to compute the residuals). Then the first (or last) 3 channels of the output would be your predicted clothed person, and the other 3 channels can be discarded (not used for the loss calculation). This is similar to how super-resolution is done with diffusion models.
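A minimal sketch of the loss computation this describes (an assumption of how it could look, not code from the thread): with a 6-channel output, only the 3 channels corresponding to the predicted clothed person enter the MSE loss, and the rest are discarded.

```python
# Sketch: restrict the MSE loss to the first 3 of 6 output channels,
# as suggested above; the remaining channels are ignored.
import torch
import torch.nn.functional as F

noise_pred = torch.randn(1, 6, 64, 64)  # illustrative 6-channel UNet output
noise = torch.randn(1, 6, 64, 64)       # illustrative noise target

# Slice out the channels for the predicted clothed person and
# compute the loss only on them.
loss = F.mse_loss(noise_pred[:, :3], noise[:, :3])
```

Whether the kept channels are the first or last 3 is just a convention; it only has to match how the target was stacked during training.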

anton-l avatar Aug 29 '22 15:08 anton-l

Hey @krrishdholakia, not quite what you're looking for, but we now have an inpainting example with Stable Diffusion here: https://github.com/huggingface/diffusers/tree/main/examples/inference#in-painting-using-stable-diffusion

patil-suraj avatar Aug 29 '22 16:08 patil-suraj