stable-diffusion-webui
[Feature] the blank part of PNG can be filled with white background
While drawing with AI previously, I found that Novel fills the blank (transparent) part of a PNG image with white, while SD fills it with black. After comparing and testing: with a white fill, the model draws shadows on the blank part, and sometimes other content, which looks more natural and harmonious overall; with a black fill, the model strictly follows the outline of the picture, which sometimes looks a little strange. So I added a button to img2img that lets SD pre-fill the blank part with white. I also added this option to the API, limited to img2img.
Translated with www.DeepL.com/Translator (free version)
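The pre-fill step described above can be sketched as follows. This is a minimal illustration, assuming Pillow; the function name and default color are my own, not taken from the PR:

```python
from PIL import Image


def fill_transparent(image: Image.Image, color: str = "#ffffff") -> Image.Image:
    """Composite an RGBA image onto a solid background, dropping alpha.

    Transparent regions take on `color` (white by default), so the model
    sees a filled canvas instead of black where the alpha channel was empty.
    """
    if image.mode != "RGBA":
        return image.convert("RGB")
    background = Image.new("RGBA", image.size, color)
    background.alpha_composite(image)
    return background.convert("RGB")
```

Swapping the color string for a user-supplied value is what makes the later color-picker version of the option possible.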
This would help me immensely.
Great work, this will help me too!
Could you make it configurable, even if only in a json rather than the UI?
I've been manually trying to repair transparency in generated images. The workflow is: process the image multiple times with black, grey, white, red, green, and blue backgrounds, then create a mask from the results.
Having a textbox where I can put "change transparency to color: #hex [ ff0000 ] " would be great.
Until the models support transparency natively, you could automatically replicate my workflow by processing the image 6 times (with 000 black, fff white, 888 gray, f00 red, 0f0 green, 00f blue) and then creating a mask layer from the results: mark a pixel transparent when the output matches the background colour above threshold A (e.g. 90% similarity) in more than threshold B of the runs (e.g. 5 out of 6 cases).
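The voting step of that workflow could be sketched like this, assuming Pillow and six already-generated outputs (one per background). The tolerance and vote thresholds are illustrative stand-ins for "threshold A" and "threshold B":

```python
from PIL import Image

# The six background colors from the workflow: black, white, gray, red, green, blue.
BACKGROUNDS = [(0, 0, 0), (255, 255, 255), (136, 136, 136),
               (255, 0, 0), (0, 255, 0), (0, 0, 255)]


def matches(pixel, color, tolerance=25):
    # "Matches above threshold A" approximated as a per-channel tolerance.
    return all(abs(p - c) <= tolerance for p, c in zip(pixel, color))


def transparency_mask(outputs, min_votes=5):
    """Build an L-mode mask: 255 where >= min_votes runs kept the background.

    `outputs` is one RGB image per entry in BACKGROUNDS. A pixel that still
    shows the background color in most runs was probably transparent.
    """
    width, height = outputs[0].size
    mask = Image.new("L", (width, height), 0)
    for x in range(width):
        for y in range(height):
            votes = sum(matches(img.getpixel((x, y)), bg)
                        for img, bg in zip(outputs, BACKGROUNDS))
            if votes >= min_votes:
                mask.putpixel((x, y), 255)
    return mask
```

The per-pixel loop is slow for large images; a vectorized NumPy version would be the practical choice, but the logic is the same.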
I have now changed the option to a color picker, so the background color can be customized in both the UI and the API. I don't know why the test failed, because all the tests pass locally. As for automatically generating masks by color, I think that is beyond the scope of this feature; if I write a built-in tool for it, I may open another PR.
I decided to implement this in https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/9441c28c947588d756e279a8cd5db6c0b4a8d2e4 as an option rather than a UI element in the img2img tab.
I think that's okay