
Inpainting model

Open benedlore opened this issue 2 years ago • 23 comments

Is there a new inpainting model released for researchers, or is the most recent release still the original latent diffusion model?

benedlore avatar Aug 15 '22 01:08 benedlore

So far as I know, inpainting is not a capability that is specific to any particular trained model (i.e. set of network weights). Rather, at the heart of inpainting is a piece of code that "freezes" one part of the image as it is being generated. There is actually code to do inpainting in the "scripts" directory ("inpaint.py"). I looked it over briefly, and it looks like you just have to supply a mask as a PNG file. The puzzling thing about this script is that it takes very few parameters; maybe they have just hard-wired in some reasonable defaults.

My guess is that somebody could cobble together an inpainting example in a colab notebook without too much trouble. If somebody does this, please let the rest of us know!

gregturk avatar Aug 17 '22 01:08 gregturk
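The "freeze" described above can be sketched as a per-step blend inside the sampling loop. This is a toy numpy illustration of the idea, not the actual inpaint.py code: at each denoising step, pixels outside the mask are overwritten with a noised copy of the original image, so the sampler only invents content inside the mask.

```python
import numpy as np

def blend_step(x_t, known_noised, mask):
    """One inpainting blend: mask == 1 marks the region to repaint,
    mask == 0 is frozen to the (noised) original image."""
    return mask * x_t + (1.0 - mask) * known_noised
```

This would run once per denoising step, with `known_noised` being the original image noised to the current timestep.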

Ah, so instead of a separate model, it would just be the new SD model itself being used for the inpainting as well? I was under the impression it was a separate model.

benedlore avatar Aug 17 '22 01:08 benedlore

No dude, it's already in there; you just need the weights, but there's no GUI for it.

1blackbar avatar Aug 18 '22 20:08 1blackbar

Where can I find the weights?

karray avatar Aug 23 '22 08:08 karray

When trying to run the inpainting script, I'm missing a file called last.ckpt. Is this already available somewhere? Placing sd-v1-4.ckpt there doesn't seem to work.

scyheidekamp avatar Aug 23 '22 14:08 scyheidekamp

I found the weights in a nearby repository:

wget -O models/ldm/inpainting_big/last.ckpt https://heibox.uni-heidelberg.de/f/4d9ac7ea40c64582b7c9/?dl=1

karray avatar Aug 23 '22 14:08 karray

Are there any specific requirements for the inpainting model's input? The result looked garbled for me, as if the dimensions were improperly translated.

banteg avatar Aug 24 '22 08:08 banteg

No dude, it's already in there; you just need the weights, but there's no GUI for it.

You can try Lama Cleaner, it integrates multiple inpainting models, including LDM.


Sanster avatar Aug 26 '22 15:08 Sanster

This link doesn't seem to be working anymore; anybody got an updated link?

There is also a download link:

wget -O models/ldm/inpainting_big/model.zip https://ommer-lab.com/files/latent-diffusion/inpainting_big.zip

but I didn't try it.

karray avatar Aug 28 '22 15:08 karray

Does anybody have a version of this inpainting script that also takes a text prompt, so that the masked parts of the image (and only those areas) are pushed in the direction of the prompt, while still taking the surroundings into account? Thank you.

javismiles avatar Aug 28 '22 21:08 javismiles
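The prompt-guided inpainting asked about above can be sketched as a sampling loop that combines a text-conditioned denoising step with the mask blend. Everything here is a placeholder: `denoise(x, t)` stands in for the prompt-conditioned model step and `add_noise(image, t)` for the forward noising process; neither is a real Stable Diffusion API.

```python
import numpy as np

def inpaint_with_prompt(image, mask, denoise, add_noise, steps=50):
    """Toy loop: only the masked region (mask == 1) follows the prompt;
    the rest is re-frozen to the original image at every step."""
    x = np.random.randn(*image.shape)
    for t in reversed(range(steps)):
        x = denoise(x, t)                                  # prompt-guided update
        x = mask * x + (1.0 - mask) * add_noise(image, t)  # freeze outside mask
    return x
```

Because the surroundings are re-imposed at every step, the denoiser "sees" them and can blend the masked region into its context.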

So hold on, someone said we could use the SD model, but then that didn't work, so people started downloading the old LDM model for inpainting?

benedlore avatar Aug 30 '22 19:08 benedlore

So hold on, someone said we could use the SD model, but then that didn't work, so people started downloading the old LDM model for inpainting?

I found something that claims to be using Stable Diffusion. Here's the walkthrough video: https://www.youtube.com/watch?v=N913hReVxMM

And here's the colab notebook: https://colab.research.google.com/drive/1R2HJvufacjy7GNrGCwgSE3LbQBk5qcS3?usp=sharing

Jellybit avatar Aug 31 '22 02:08 Jellybit

So hold on, someone said we could use the SD model, but then that didn't work, so people started downloading the old LDM model for inpainting?

SD is based on LDM. I guess the inpainting script is a legacy example from that project.

karray avatar Aug 31 '22 07:08 karray

This link doesn't seem to be working anymore; anybody got an updated link?

There is also a download link:

wget -O models/ldm/inpainting_big/model.zip https://ommer-lab.com/files/latent-diffusion/inpainting_big.zip

but I didn't try it.

The link works, but the download seems to be the checkpoint itself, not an archive. I renamed it to last.ckpt. The script runs with no errors, but I get a garbled result. That might be an Apple Silicon problem, though...

krummrey avatar Aug 31 '22 10:08 krummrey

So hold on, someone said we could use the SD model, but then that didn't work, so people started downloading the old LDM model for inpainting?

I found something that claims to be using Stable Diffusion. Here's the walkthrough video: https://www.youtube.com/watch?v=N913hReVxMM

And here's the colab notebook: https://colab.research.google.com/drive/1R2HJvufacjy7GNrGCwgSE3LbQBk5qcS3?usp=sharing

There are some problems with that notebook; it doesn't work out of the box, at least.

jtac avatar Aug 31 '22 11:08 jtac

So to be clear, the inpainting we are all doing is the same, identical inpainting as with LDM months ago, before SD existed, right? The file from the link is the same inpainting checkpoint from back then with good old LDM, I think. I have not checked the notebook yet, but that would be the first thing claiming to use SD, I think.

benedlore avatar Aug 31 '22 14:08 benedlore

@benedlore I'm not completely sure, but I have the impression that the diffusers library (https://github.com/huggingface/diffusers) uses the main SD model for inpainting with its own engine.

In this colab (https://colab.research.google.com/drive/1k9dnZDsVzKMk1-ZlBwZPUPVzDYZySmCQ) you can see it in use, and I don't see any other model used for inpainting but "CompVis/stable-diffusion-v1-4".

Here is the source code from the diffusers library for reference (https://github.com/huggingface/diffusers/blob/c7a3b2ed31ce3c49c8f9b84569fa67129bd59fa2/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py).

Therefore, it seems possible to use the v1.4 model for inpainting too, just not with the official SD repo.

siriux avatar Aug 31 '22 15:08 siriux
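The diffusers pipeline linked above applies the mask in latent space, i.e. at 1/8 of the image resolution. As a rough, assumed illustration (not the pipeline's exact code), an image-resolution binary mask can be reduced to latent resolution with block max-pooling, so any masked pixel marks its whole latent cell for repainting:

```python
import numpy as np

def mask_to_latent(mask, factor=8):
    """Downsample a binary (H, W) mask to (H // factor, W // factor)
    via max-pooling: conservative, since one masked pixel flags the
    entire latent cell. The real pipeline may interpolate instead."""
    h, w = mask.shape
    assert h % factor == 0 and w % factor == 0
    blocks = mask.reshape(h // factor, factor, w // factor, factor)
    return blocks.max(axis=(1, 3))
```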

there's some problems with that notebook, doesn't work out of the box at least.

@jtac , The author updated the notebook with bug fixes here:

https://colab.research.google.com/drive/1R2HJvufacjy7GNrGCwgSE3LbQBk5qcS3?usp=sharing

I tested it, and it works.

Jellybit avatar Sep 01 '22 12:09 Jellybit

I found another implementation here:

https://colab.research.google.com/drive/1cd35l21ewU0fwWEnPjY_th5YORmMfZCd#scrollTo=U6Vf4xi_Prtv

It uses this UI which has inpainting as part of it:

https://github.com/hlky/stable-diffusion

Jellybit avatar Sep 02 '22 02:09 Jellybit

@Jellybit AFAIK the hlky fork doesn't have proper inpainting, just masking. It just performs diffusion as usual, and then applies the mask, but this can create artifacts on the boundaries. Real inpainting should take into account the frozen pixels outside of the mask to avoid seams/artifacts.

This is what it says in the Crop/Mask help:

Masking is not inpainting. You will probably get better results manually masking your images in photoshop instead.

It would be great if they implemented real inpainting, because the hlky fork is one of the best currently available in everything else.

siriux avatar Sep 02 '22 08:09 siriux

Actually, I just found this pull request where they do something in between masking and inpainting; it might be interesting to see how it compares to real inpainting. https://github.com/hlky/stable-diffusion-webui/pull/308

siriux avatar Sep 02 '22 08:09 siriux
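An "in between" approach like that presumably composites with a softened mask edge. A minimal sketch of the idea, assuming a simple box blur for feathering: blending `soft * generated + (1 - soft) * original` then fades across the seam instead of cutting hard, which is what hides the boundary artifacts mentioned above.

```python
import numpy as np

def feather_mask(mask, radius=2):
    """Soften a hard 0/1 mask by repeatedly averaging each pixel
    with its four neighbours (edge-padded box blur)."""
    soft = mask.astype(float)
    for _ in range(radius):
        p = np.pad(soft, 1, mode="edge")
        soft = (p[1:-1, 1:-1] + p[:-2, 1:-1] + p[2:, 1:-1]
                + p[1:-1, :-2] + p[1:-1, 2:]) / 5.0
    return soft
```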

Added Stable Diffusion 1.4 inpainting to Lama Cleaner. It's based on the awesome diffusers library.

[Screenshots: original image and inpainted result]

Sanster avatar Sep 23 '22 01:09 Sanster

No dude, it's already in there; you just need the weights, but there's no GUI for it.

Here is a GUI for inpainting: https://github.com/CreamyLong/stable-diffusion/blob/master/scripts/inpaint_gradio.py

CreamyLong avatar Oct 12 '23 08:10 CreamyLong