                        Image inpainting
Hi,
Two quick questions around this:
- Is there a Colab or guide for leveraging this model for image inpainting?
- Given a source person image + t-shirt image, how can I use a guided text prompt (e.g. "show the person wearing this t-shirt") to generate such an image?
Did some further research:
- If I have the cloth mask, cloth image, human image, parsed human image, and human pose, how can I concatenate these together to present a single input to the diffusion model, have it generate an output, and then match that against the expected output? (A possible stacking sketch is below.)
- Ideally, I could just concatenate the cloth image + human image and check the output against the expected one.

Open to thoughts/ways of doing this.
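For the concatenation itself, one straightforward option is to stack everything along the channel dimension with `torch.cat`. A rough sketch follows; the shapes are assumptions, not something fixed by this thread (RGB images as 3-channel tensors, the mask / parse / pose maps as 1-channel tensors, all at an arbitrary 256x192 resolution):

```python
import torch

# Stand-in data with assumed shapes: (batch, channels, height, width)
cloth_mask = torch.rand(1, 1, 256, 192)
cloth = torch.rand(1, 3, 256, 192)
human = torch.rand(1, 3, 256, 192)
human_parse = torch.rand(1, 1, 256, 192)
pose_map = torch.rand(1, 1, 256, 192)

# Stack all conditioning signals along the channel dimension
model_input = torch.cat([cloth_mask, cloth, human, human_parse, pose_map], dim=1)
print(model_input.shape)  # torch.Size([1, 9, 256, 192])
```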
Hi @anton-l,
Just wanted to circle back to this. I'm not sure how I could concatenate the two images and pass them, together with the output, through the diffusion model. Curious if you have any ideas on how to approach this?
cc: @patrickvonplaten, @patil-suraj
Hi @krrishdholakia! By setting `in_channels` and `out_channels` in the UNet configuration you can adapt it to concatenated inputs and outputs, e.g. `in_channels=6` for two concatenated RGB input images.
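For example, a minimal sketch of such a configuration (the `sample_size` value here is an arbitrary assumption):

```python
from diffusers import UNet2DModel

# A UNet that takes two RGB images concatenated along the channel
# dimension (3 + 3 = 6) and predicts a matching 6-channel output.
model = UNet2DModel(sample_size=64, in_channels=6, out_channels=6)
```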
@anton-l How would you calculate the loss at the intermediate stages for this, given that you want it to generate a target image (i.e. the person wearing the clothing) that differs from the concatenated inputs (clothing item + source person image)?
```python
# Predict the noise residual
noise_pred = model(noisy_images, timesteps)["sample"]
loss = F.mse_loss(noise_pred, noise)
accelerator.backward(loss)
```
Hey @anton-l, just wanted to follow up on this.
cc: @patil-suraj, @patrickvonplaten
@krrishdholakia the idea would be to feed the concatenated clothing + person images (6 channels), and have 6 channels as output as well (since the number of channels needs to match to compute the residuals). Then the first (or last) 3 channels of the output would be your predicted clothed person, and the other 3 channels can be discarded (not used for the loss calculation). This is similar to how super-resolution is done with diffusion models.
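A rough sketch of the channel bookkeeping this implies (random tensors stand in for real data, and exactly how the noise is applied across channels is a design choice the thread leaves open):

```python
import torch
import torch.nn.functional as F
from diffusers import UNet2DModel

model = UNet2DModel(sample_size=64, in_channels=6, out_channels=6)

# Stand-in data: clothing + person images stacked on the channel dim.
noisy_input = torch.randn(1, 6, 64, 64)
noise = torch.randn(1, 6, 64, 64)
timesteps = torch.tensor([10])

noise_pred = model(noisy_input, timesteps)["sample"]  # (1, 6, 64, 64)

# Keep only the first 3 output channels (the predicted clothed person);
# the remaining 3 channels are discarded and not used for the loss.
loss = F.mse_loss(noise_pred[:, :3], noise[:, :3])
loss.backward()
```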
Hey @krrishdholakia, not quite what you're looking for, but we now have an in-painting example with Stable Diffusion here: https://github.com/huggingface/diffusers/tree/main/examples/inference#in-painting-using-stable-diffusion
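For reference, a minimal usage sketch of in-painting with `StableDiffusionInpaintPipeline` (the checkpoint name and image paths are placeholders, and argument names have varied across diffusers versions):

```python
import PIL.Image
from diffusers import StableDiffusionInpaintPipeline

# Placeholder checkpoint name; substitute an inpainting-capable model.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting"
).to("cuda")

person = PIL.Image.open("person.png").convert("RGB")
mask = PIL.Image.open("tshirt_mask.png").convert("RGB")  # white = repaint

result = pipe(
    prompt="a person wearing a graphic t-shirt",
    image=person,
    mask_image=mask,
).images[0]
result.save("inpainted.png")
```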