
[Feature] the blank part of PNG can be filled with white background

Open sena-nana opened this issue 1 year ago • 1 comments

While drawing with AI, I noticed that NovelAI fills the transparent part of a PNG with white, while SD fills it with black. After comparing the two, I found that with a white fill the model draws shadows, and sometimes other content, in the blank area, which looks more natural and harmonious overall. With a black fill, the model strictly follows the outline of the picture, which can sometimes look a little strange. So I added a button to img2img that lets SD pre-fill the blank part with white. I also added this option to the API, limited to img2img.
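The pre-fill described above amounts to compositing the RGBA image onto an opaque background before img2img sees it. A minimal sketch with Pillow (the function name is illustrative, not part of the webui code):

```python
from PIL import Image

def fill_transparency(img: Image.Image, color=(255, 255, 255)) -> Image.Image:
    """Flatten a transparent PNG onto a solid background (white by default)."""
    img = img.convert("RGBA")
    background = Image.new("RGBA", img.size, color + (255,))
    # Alpha-composite the image over the opaque background, then drop alpha.
    return Image.alpha_composite(background, img).convert("RGB")
```

Opaque pixels pass through unchanged; fully transparent pixels become the fill color, which is exactly the behavior being compared between white and black fills.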

Translated with www.DeepL.com/Translator (free version)

sena-nana avatar Dec 04 '22 11:12 sena-nana

This would help me immensely.

Mousewrites avatar Dec 11 '22 02:12 Mousewrites

Great work, this will help me too!

Could you make it configurable, even if only in a json rather than the UI?

I've been manually trying to repair transparency in generated images. My workflow is to process the image multiple times with black, grey, white, red, green, and blue backgrounds, then create a mask from the results.

Having a textbox where I can put "change transparency to color: #hex [ff0000]" would be great.

Until the models support transparency natively, you could replicate my workflow automatically: process the image 6 times (with 000 black, fff white, 888 grey, f00 red, 0f0 green, 00f blue), then create a mask layer where an output pixel matches that run's background color above threshold A (e.g. 90% similarity) in at least threshold B of the runs (e.g. 5 out of 6).
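The voting idea above can be sketched as follows: a pixel is treated as originally transparent if, across runs with different fill colors, its output color stays close to that run's fill in at least `min_votes` cases. Thresholds and the function name are illustrative, not from the webui code:

```python
import numpy as np

def vote_mask(outputs, fills, match_threshold=0.9, min_votes=5):
    """outputs: list of HxWx3 float arrays in [0, 1]; fills: list of RGB tuples in [0, 1].

    Returns a boolean HxW mask of pixels judged to be background.
    """
    votes = np.zeros(outputs[0].shape[:2], dtype=int)
    for out, fill in zip(outputs, fills):
        # Closeness to this run's fill color, averaged over channels;
        # closeness >= threshold counts as one vote for "background".
        closeness = 1.0 - np.abs(out - np.asarray(fill, dtype=float)).mean(axis=-1)
        votes += (closeness >= match_threshold).astype(int)
    return votes >= min_votes
```

A pixel that tracks the background in all six runs collects six votes and lands in the mask; a pixel with stable image content rarely matches more than one fill color.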

Luke2642 avatar Dec 14 '22 12:12 Luke2642


I've now changed the option to a color picker, so the background color can be customized in both the UI and the API. I don't know why the CI test failed, since all tests pass locally. As for automatically generating masks by color, I think that's beyond the scope of this feature; if I write a built-in tool for it, I'll probably open another PR.
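For the API side, a color picker typically round-trips as a `#rrggbb` string, which the earlier comment also asked for. A small parsing sketch (the `background_color` field name in the payload is a hypothetical illustration, not the PR's actual parameter):

```python
def hex_to_rgb(hex_color: str):
    """Parse a '#rrggbb' string such as '#ff0000' into an (r, g, b) tuple."""
    h = hex_color.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

# Hypothetical img2img request body; the field name is assumed for
# illustration only -- check the PR diff for the real parameter.
payload = {
    "init_images": ["<base64 png>"],
    "background_color": "#ffffff",
}
```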

sena-nana avatar Dec 14 '22 15:12 sena-nana

I decided to implement this in https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/9441c28c947588d756e279a8cd5db6c0b4a8d2e4 as an option rather than a UI element in img2img tab.

AUTOMATIC1111 avatar Dec 24 '22 06:12 AUTOMATIC1111


I think that's okay.

sena-nana avatar Dec 24 '22 10:12 sena-nana