stable.art
[Question] Is it possible to generate with transparent background using this plugin?
I know that Stable Diffusion doesn't know yet what an alpha channel is and can't make transparency. But considering that it's a Photoshop plugin, maybe it can now? So is it possible to generate with a transparent background using this plugin? Or will it maybe be possible later?
stable.art is just a front-end for the model, so it can do what the model allows it to. The model cannot do what you're asking, like you said. The next best thing is specifying in the prompt that you want a solid white background (maybe even a solid green background that you can chroma-key out later).
In fact, if you, for instance, import an image with transparency data into the webui for img2img, inpainting, etc., transparency data will be converted to a solid color (white by default).
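To make the "converted to a solid color" behavior concrete, here is a minimal pure-Python sketch (no libraries, function name is my own) of the standard alpha compositing that a backend performs when it flattens an RGBA image; white as the default fill matches what the webui reportedly does:

```python
# Sketch: "converting transparency to a solid color" is just alpha
# compositing each RGBA pixel onto an opaque background fill.
def flatten_pixel(rgba, bg=(255, 255, 255)):
    """Composite one RGBA pixel onto an opaque background color."""
    r, g, b, a = rgba
    alpha = a / 255.0
    return tuple(round(c * alpha + bc * (1.0 - alpha))
                 for c, bc in zip((r, g, b), bg))

print(flatten_pixel((0, 0, 0, 0)))        # fully transparent -> (255, 255, 255)
print(flatten_pixel((200, 30, 30, 255)))  # fully opaque -> unchanged
print(flatten_pixel((0, 0, 0, 128)))      # half-transparent black -> mid gray
```

After this step the model only ever sees three color channels, which is why the transparency information is gone by the time img2img or inpainting runs.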
I just hoped that Photoshop could use some of its own tricks in generations. :-) I hope SD will soon add transparency, as I heard they're working this idea out.
Perhaps the plugin could call a script to automate the process of removing a background, but results would vary depending on the colors/content of the generated image. Remember, Photoshop itself isn't doing anything more than displaying the generation's results, since everything happens in the webui/backend.
Nah, it would be low quality, same as just removing that background with the magic eraser. Well, we'll wait until SD learns the alpha channel.
We'll wait a very long time then.
The diffusion model works by adding then removing noise from an image, attempting to create objects out of the resulting information for every specified step. For txt2img, the starting image is random color noise. There's no way for the model to determine what will become transparent during the first stage, so it makes more sense to manually remove the background after the generation is finished.
If there is no color information, e.g. where there is transparency, the model cannot create anything out of it.
maybe even a solid green background that you can chroma-key out later
Btw, since you mentioned it: I know that it is possible to remove the chroma key, but is it more precise than any other color? I saw some videos where people cleared the green out of an image, but they did the same thing they would do with any other color. Just determine a color that will be deleted. So what's so special about the green and blue chroma keys? As far as I know, there is no "magic" tool that will remove the green perfectly cleanly. You will have to clean up the edges anyway, whether you removed green or any other color.
Just determine a color that will be deleted.
Chroma keying is a little more involved than simply selecting a specific color and then deleting it. It's not a magic wand-like tool; during the process, the keying tools look for color information throughout the whole image based on a specified color range.
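To illustrate the "color range" idea, here is a hedged sketch of the core of a distance-based keyer: each pixel gets an opacity based on how close its color is to the key color, rather than being deleted on an exact match like a magic-wand flood fill. The function name and thresholds are mine, purely illustrative:

```python
# Sketch: transparency assigned by distance from the key color in RGB
# space, with a soft transition band instead of a hard on/off cut.
import math

def key_alpha(pixel, key=(0, 255, 0), near=60.0, far=160.0):
    """Return 0.0 (fully keyed out) .. 1.0 (fully opaque) for one RGB pixel."""
    dist = math.dist(pixel, key)  # Euclidean distance in RGB space
    if dist <= near:
        return 0.0  # close to the key color: remove entirely
    if dist >= far:
        return 1.0  # far from the key color: keep entirely
    return (dist - near) / (far - near)  # in between: soft edge

print(key_alpha((10, 250, 5)))   # near pure green -> 0.0
print(key_alpha((200, 60, 40)))  # reddish tone -> 1.0
print(key_alpha((0, 180, 0)))    # darker green -> partial transparency
```

Real keyers (in After Effects, Nuke, etc.) work in other color spaces and add spill suppression, but the range-based principle is the same.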
what's so special about the green and blue chroma keys?
Nothing, except that green screens and blue screens (especially green screens) are popular because the subject is less likely to contain the same or similar color information as the background that will be keyed out. In digital applications where an alpha channel is not supported, magenta (#ff00ff) as a background color is very popular.
As far as I know, there is no "magic" tool that will remove the green perfectly cleanly. You will have to clean up the edges anyway, whether you removed green or any other color.
Ideally you want the subject to be very sharp against the background to be keyed out. The Stable Diffusion model is capable of producing very sharp images of subjects against specified solid-color backgrounds, such that in some cases no tweaking of the keying mask is required. Even so, in many cases tweaking may still be necessary because AI isn't perfect.
All in all, the keying quality will depend on the quality of the image itself. You can't polish a turd. No "magic" tool will help you there.
So I guess there is not much difference in what color the background of the generated image is. Green or yellow or white, it will depend on how sharp the edges of your needed image are. :-D I guess that's the only way, the same old magic eraser and simple eraser "finetune". :-)
I don't think you're understanding me. The chroma keying process is not comparable to using the magic eraser or erasing the background by hand. It looks for color information. If, for instance, you generated an image of a character with a yellow background and you attempted to chroma key the yellow out of the image, then the yellow background would become transparent, but the character would be affected as well, because the colors of the character likely have some yellow mixed into them (the skin, for instance). You'd end up with a character with translucent skin.
How sharp the subject is against its background still matters when chroma keying, however, because while chroma keying usually leads to cleaner results than the magic wand/magic eraser, if the edges of the subject feather into the background, then after keying the background out, the resulting edges won't be as sharp.
https://www.youtube.com/watch?v=g02dGma2ehU This is a video explanation of what chroma keying is. It is widely used in video compositing but can be used for still pictures as well. The woman is standing in front of a green screen, and the green color is keyed out of the entire image. Because her skin, shirt, hair, etc. barely have any green mixed into them, they're largely unaffected, and no green edges are visible (except where motion blur occurs), unlike what you would expect if they had simply used a tool like the magic eraser/magic wand on every single frame.
Anyway, this conversation is going off-topic from the original post. You already have your answer.