
Desaturated results with inpainting

Open JFMugen opened this issue 6 months ago • 3 comments

Hey there, inpainting is the main reason I want to use this tool, but there is always a very visible color difference in the inpainted region. For example,

Image

Image

Or another even more visible one

Image

Image

Both use the same model, and I also tried different models, but it's always the same look/issue.

Any way I can do something about this? I am pretty new to this, so I would appreciate it if you could point out anything obvious I don't know.

JFMugen avatar May 22 '25 17:05 JFMugen

Another more obvious example

Image

Image

I also saw an old topic about this, but it had no image examples and was a year old at this point, so I decided to create a new one.

I also tried with sd.next and I am not having this issue there. There may be a slight color difference, but it's usually hard to notice without comparing against the original image. That's why I wanted to show examples and make clear it's more than just a slight difference.

JFMugen avatar May 22 '25 17:05 JFMugen

You can manually work around this with the AI Segmentation plugin from https://github.com/Acly/krita-ai-tools; it makes it much easier to select exactly what you want to inpaint (though you'll still have the color problem unless you set your grow/feather/padding to 0 under Settings - Diffusion). Alternatively, select the person or object you just inpainted, invert the selection, and delete the desaturated parts. You can also generate a Scribble or Canny ControlNet image and regenerate the entire composition; it will be colored correctly, though maybe not exactly the way you had it colored before. I agree it would be great if there were a technical way to adjust this behavior and save a lot of time, though.
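To make the grow/feather settings mentioned here more concrete: conceptually, "grow" dilates the selection mask outward and "feather" softens its edge into a gradual falloff. Below is a rough numpy sketch of that idea (dilation via shifted maxima, feathering via a repeated 5-point blur); it is an illustration of the concept, not the plugin's actual implementation.

```python
import numpy as np

def grow_and_feather(mask: np.ndarray, grow: int, feather: int) -> np.ndarray:
    """Conceptual sketch of grow/feather on a selection mask.

    mask: 2-D float array in [0, 1]. grow: pixels of dilation.
    feather: number of blur passes softening the edge.
    NOT the plugin's real code - just an illustration.
    """
    out = mask.astype(float)
    # Grow: naive morphological dilation, one pixel per iteration,
    # by taking the maximum of the mask and its four shifted copies.
    for _ in range(grow):
        shifted = out.copy()
        shifted[1:, :] = np.maximum(shifted[1:, :], out[:-1, :])
        shifted[:-1, :] = np.maximum(shifted[:-1, :], out[1:, :])
        shifted[:, 1:] = np.maximum(shifted[:, 1:], out[:, :-1])
        shifted[:, :-1] = np.maximum(shifted[:, :-1], out[:, 1:])
        out = shifted
    # Feather: repeated 5-point blur turns the hard edge into a soft falloff.
    for _ in range(feather):
        padded = np.pad(out, 1, mode="edge")
        out = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:] + padded[1:-1, 1:-1]) / 5
    return np.clip(out, 0.0, 1.0)

mask = np.zeros((9, 9))
mask[4, 4] = 1.0
result = grow_and_feather(mask, grow=1, feather=1)
```

With grow/feather at 0, the model only ever sees and fills the exact selection, which is why that setting avoids the desaturated border at the cost of harder seams.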

minecraftman111 avatar May 23 '25 04:05 minecraftman111

You can manually work around this with the AI Segmentation plugin from https://github.com/Acly/krita-ai-tools, it makes it much easier to select exactly what you want to inpaint

While there are some situations where this would work, it doesn't help when just adding a cloud like I shared above. Thanks for the suggestion though.

You can also generate a Scribble or Canny ControlNet image and just regenerate the entire composition and it will be colored correctly, though maybe not exactly the way you had it colored before.

Can you explain this a bit more? I am pretty new to this stuff and don't know how to do that.

I agree it would be great if there was a technical way to adjust this behavior and save a lot of time though.

Definitely. I hope it can be fixed.

edit.

Looks like it is possible to use a VAE to fix color issues? There is a folder for VAE in the models folder, but I am not sure if it's possible to use it.

https://huggingface.co/stabilityai/sdxl-vae/blob/main/sdxl_vae.safetensors

JFMugen avatar May 23 '25 09:05 JFMugen

You can also generate a Scribble or Canny ControlNet image and just regenerate the entire composition and it will be colored correctly, though maybe not exactly the way you had it colored before.

Can you explain this a bit more? I am pretty new to this stuff and don't know how to do that.

It's in the guide: https://docs.interstice.cloud/control-layers/

minecraftman111 avatar May 23 '25 20:05 minecraftman111

It's in the guide: https://docs.interstice.cloud/control-layers/

Hmm, this is basically recreating the image with a guide, right? Thanks for letting me know, but it's definitely not the solution for the color problem :/

JFMugen avatar May 24 '25 08:05 JFMugen

Exactly how seamless inpainting is depends on a lot of things. Generally, no model I know of is able to match colors in all cases. The main difference is how obvious the mismatch is.

  1. Is this about "true" inpainting (strength is 100%) or img2img/refinement?
  2. Which model was used? Some models (SD1.5/SDXL) have inpaint models, others (Pony/Illustrious...) do not.
  3. Selection grow/feather have a big impact.

The worst case is inpainting relatively small regions where the border is a plain color or gradient. That makes color shifts very obvious, whereas with larger areas and high-frequency detail, the model has a lot of room to compensate for its bias/deficiency.

The one difference between the plugin and UIs like Forge is probably that it uses differential diffusion by default. This generates better transitions between old and new content when they differ. But compared to alpha blending, it's worse at hiding color shifts from the model. Alpha blending looks bad if you blend different content on top of each other, but for things like gradient backgrounds (sky etc.) it's perfect. Other UIs typically use alpha blending, so they perform better in that particular case.

Note that you can always fix this by introducing more alpha blending with a soft eraser, smoothing the transition. Noise-blending transitions, on the other hand, can only be done at generation time. See also #557 and https://github.com/Acly/krita-ai-diffusion/issues/662#issuecomment-2092485463
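The soft-eraser fix described above is just alpha compositing: the generated layer fades into the original along a soft mask instead of meeting it at a hard seam. A minimal numpy sketch of that compositing (the same math Krita applies when you erase the layer edge with a soft brush):

```python
import numpy as np

def alpha_blend(original: np.ndarray, generated: np.ndarray,
                alpha: np.ndarray) -> np.ndarray:
    """Composite generated pixels over the original with a soft mask.

    original, generated: HxWx3 float images in [0, 1].
    alpha: HxW float mask in [0, 1]; 1 = keep generated, 0 = keep original.
    """
    a = alpha[..., None]  # broadcast the mask over the color channels
    return a * generated + (1.0 - a) * original

# Toy example: a slightly darker generated patch fades back into the
# original across a transition column instead of showing a hard seam.
original = np.full((4, 4, 3), 0.8)
generated = np.full((4, 4, 3), 0.6)
alpha = np.zeros((4, 4))
alpha[:, :2] = 1.0   # fully generated on the left
alpha[:, 2] = 0.5    # soft transition column
blended = alpha_blend(original, generated, alpha)
```

This is why a wider soft eraser hides a brightness shift so well: the shift is spread across the feathered region instead of landing on a single visible border.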

Acly avatar Jun 05 '25 16:06 Acly

@Acly actually I might have new information for the issue. I was using sd.next, and it has an option for Remote VAE, which uses Hugging Face's VAE service. I personally have no idea why, but while using that, the colors were always a tad desaturated. Others confirmed that as well. While using a local VAE it's all good: correct colors while inpainting etc.

So maybe a similar thing is happening here? Something to do with VAE decode?

Because this is not a "colors different from original image" issue; it's just always desaturated. I never saw it come out more saturated, for example.

Also, to answer your questions:

  • Yeah, true inpainting.

  • SDXL models; tried a few, but all the same. Not inpaint-specific models.

  • It helps by making the issue less visible in some situations, but the issue is still there.

JFMugen avatar Jun 05 '25 16:06 JFMugen

I don't think it's VAE decode. This is quite easy to test (take an image and VAE encode + decode it). The process is not lossless, but it does not result in these kinds of saturation/brightness issues. I also have a fair number of inpainting tests with scenarios that produce great results using the same process. Yet it runs into issues when the surrounding image is e.g. uniformly dark or bright.
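If you want to run that round-trip test yourself, the part worth automating is the comparison: measure the mean HSV saturation of the original versus the decoded image. The sketch below implements only that metric in numpy (the `roundtripped` image would come from your VAE encode + decode step, e.g. via diffusers' `AutoencoderKL`, which is left out here so the example runs without any model download); the synthetic check just demonstrates that desaturation shows up as a negative shift.

```python
import numpy as np

def mean_saturation(img: np.ndarray) -> float:
    """Mean HSV saturation of an HxWx3 float image in [0, 1].

    Per pixel: saturation = (max - min) / max, 0 where max is 0.
    """
    mx = img.max(axis=-1)
    mn = img.min(axis=-1)
    sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-8), 0.0)
    return float(sat.mean())

def saturation_shift(original: np.ndarray, roundtripped: np.ndarray) -> float:
    """Negative result = the round-tripped image is desaturated on average.

    Feed the VAE encode+decode output in as `roundtripped` to check
    whether the decode step is really where the color is lost.
    """
    return mean_saturation(roundtripped) - mean_saturation(original)

# Synthetic check: pulling colors halfway toward gray reads as negative.
rng = np.random.default_rng(0)
img = rng.random((8, 8, 3))
gray = img.mean(axis=-1, keepdims=True)
washed = 0.5 * img + 0.5 * gray
shift = saturation_shift(img, washed)
```

A healthy local VAE round-trip should give a shift close to zero; a consistently negative value would point at the decode step after all.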

The best way to rule out other issues is to provide a full .kra file of a failure case with the active inpaint selection.

Acly avatar Jun 05 '25 21:06 Acly

@Acly here are some examples and the .kra file. This one is the original image (I drew white as a guide; it helps with sd.next)

Image

This one with sd.next - full VAE

Image

This one is sd.next - remote VAE (the Hugging Face thing). And yeah, it doesn't look like the same issue

Image

This one is with Krita; the result is actually a lot better, but as you can see it's a bit desaturated/darker, as always

Image

And this is .kra file

https://fex.net/s/oyxn1pl

Hope it helps

JFMugen avatar Jun 06 '25 11:06 JFMugen

Was the white paint-over part of the image when you generated? And was the selection inside or outside of it? The colors at the selection border can make a difference, and should match the background.

Anyway this is a pretty typical failure case scenario (plain background, cloud is very bright, so model compensates by making the background darker).

Here is what I get with built-in Digital Artwork XL style (I open your .kra, select around ... not inside ... the white area and Fill).

Image Image

Different model (also SDXL):

Image Image

Keeping that model, but turning off Differential Diffusion: Image Image

... still visible halo, but subtle, and it really depends on the model & content.

It's possible Forge has some kind of automatic color correction (I think A1111 had this). It might help in this case, but it's difficult or impossible to get right in the general case (it might make things worse in other scenarios).
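For reference, automatic color correction of this kind is usually histogram matching: remap the generated result so its value distribution matches the original. A1111 reportedly does this in LAB space via scikit-image; the sketch below is a simplified per-channel RGB version in plain numpy (quantile lookup against the sorted reference values), just to show the mechanism and why it can backfire when the new content legitimately has different colors.

```python
import numpy as np

def match_channel(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Remap one channel so its value distribution matches the reference.

    Classic histogram matching: each source pixel's rank (quantile) is
    looked up in the sorted reference values.
    """
    src = source.ravel()
    ref_sorted = np.sort(reference.ravel())
    ranks = np.argsort(np.argsort(src))          # rank of each source pixel
    quantiles = ranks / max(src.size - 1, 1)
    idx = (quantiles * (ref_sorted.size - 1)).round().astype(int)
    return ref_sorted[idx].reshape(source.shape)

def color_correct(generated: np.ndarray, original: np.ndarray) -> np.ndarray:
    """Per-channel histogram matching of generated toward original (HxWx3)."""
    return np.stack(
        [match_channel(generated[..., c], original[..., c]) for c in range(3)],
        axis=-1,
    )

# A uniformly darkened result gets pulled back to the original brightness.
original = np.linspace(0.2, 0.9, 64).reshape(8, 8)[..., None].repeat(3, -1)
darkened = original * 0.7
corrected = color_correct(darkened, original)
```

The failure mode Acly describes is visible in the design: the correction forces the output distribution toward the original's, so if you inpaint a bright cloud into a dark sky, matching would darken the cloud itself, making things worse rather than better.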

Flux-fill (which doesn't have the average-brightness problem that SDXL models have): Image

Acly avatar Jun 06 '25 12:06 Acly

@Acly hmm, as you said, Digital Artwork XL has no issues.

But may I ask how to disable differential diffusion? Couldn't find any setting.

Also, while we are at it, I saw that there is a Fooocus inpaint patch inside the models folder. Is it used by default?

About flux, I doubt my 6700XT can handle it :/

JFMugen avatar Jun 07 '25 08:06 JFMugen

But may I ask how to disable differential diffusion? Couldn't find any setting.

Can't disable it inside the plugin atm. I used the workflow in ComfyUI directly and disabled it there to see if it would fix the issues with some models. It helps a little, but doesn't seem to be the main culprit: it doesn't prevent the result from being too dark, it just makes it slightly less obvious.

Also, while we are at it, I saw that there is a Fooocus inpaint patch inside the models folder. Is it used by default?

Yes. You can toggle it when you use "Generate (Custom)" instead of "Fill" and disable the "Seamless" option.

It's possible some models are finetuned "harder" (further from base SDXL) and don't merge perfectly with the inpaint model. I don't see a way to fix that though; you'd have to train or finetune an inpaint model specifically for them.

Acly avatar Jun 09 '25 07:06 Acly

@Acly Then I guess it is what it is for now. Thanks for all the replies; I'll check again later with better hardware and better models.

JFMugen avatar Jun 09 '25 08:06 JFMugen