
LaMa and MAT results differ from the original repos

Open jarheadjoe opened this issue 1 year ago • 3 comments

For example, I want to remove something from the original image ("remove the passers-by in the background"): [input image] Original LaMa output: [image] Your node output: [image] The inpainted area is blurry and its border is clearly visible. The same happens with MAT.

jarheadjoe avatar Jun 26 '24 08:06 jarheadjoe

What original repo are you referring to?

The reason it's blurry is that it runs at low resolution (256 for LaMa, 512 for MAT). You can technically run it at a higher resolution, but that produces grainy patterns, so I don't find it very useful.

To inpaint this image I'd downscale it, use LaMa/MAT inpaint at low resolution, do a 1st diffusion pass, upscale and crop, then run a 2nd diffusion pass at the original resolution but only on the inpaint area. So LaMa/MAT is meant as the first step in a pipeline, not a final solution.
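The geometry of that pipeline can be sketched with two small helpers. These are illustrative functions, not actual ComfyUI nodes: `fit_resolution` picks the downscale factor for the low-resolution LaMa/MAT pass, and `crop_to_mask` computes the crop box for the 2nd diffusion pass so it only touches the inpaint area.

```python
def fit_resolution(width, height, target=256):
    """Scale factor that brings the longer side down to `target`
    (256 for LaMa, 512 for MAT). Never upscales."""
    scale = target / max(width, height)
    return min(scale, 1.0)

def crop_to_mask(mask_box, image_size, padding=32):
    """Expand the mask's bounding box by `padding` pixels, clamped
    to the image, so the 2nd pass runs only around the inpaint area."""
    x0, y0, x1, y1 = mask_box
    w, h = image_size
    return (max(0, x0 - padding), max(0, y0 - padding),
            min(w, x1 + padding), min(h, y1 + padding))
```

For a 1024x768 input this gives a 0.25x downscale for the LaMa pass, and the 2nd pass would crop to the padded mask region before diffusing at original resolution.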

Acly avatar Jun 29 '24 14:06 Acly

Original LaMa repo: https://github.com/advimman/lama (using the big-lama checkpoint). The original LaMa works well in some cases, for example as a depth ControlNet's input. The LaMa results from your repo are poor, almost like a blur to a certain extent. And for inpaint models, the mask area is not visible, so LaMa feels unnecessary as a first step: https://github.com/comfyanonymous/ComfyUI/blob/14764aa2e2e2b282c4a4dffbfab4c01d3e46e8a7/nodes.py#L346

jarheadjoe avatar Jul 23 '24 06:07 jarheadjoe

I don't use VAEEncodeForInpaint, and I downscale/crop images, which makes the low resolution less of an issue. LaMa still helps at 1.0 denoise as a base for conservative inpainting (removing objects and such).
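The "conservative" part above amounts to compositing: pixels outside the mask are kept from the original, so only the masked region ever changes. A minimal sketch, using flat lists of grayscale values purely for illustration rather than the node's actual tensor code:

```python
def composite_masked(original, inpainted, mask):
    """Keep original pixels where mask == 0; take the inpainted
    result where mask == 1. All three sequences are the same length."""
    return [inp if m else orig
            for orig, inp, m in zip(original, inpainted, mask)]
```

In a real workflow the same idea is applied per-channel to image tensors after the diffusion pass, which is why artifacts from the low-resolution LaMa base only matter inside the mask.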

For a stand-alone solution that you can slap on an image like in your example, the node would have to be more complex. I'm not particularly motivated to go for that because I don't think results are good enough in general.

Acly avatar Jul 23 '24 08:07 Acly