ComfyUI
About latent composition.
Hello. I am trying to combine two images using latent composition. Specifically, I want both of their noises to influence the same area. However, I don't know the function of X, Y, and Feather. Using 0, 0, 80 (copied from an example), I was able to get the result of image noise 1 but none of noise 2, or vice versa (by swapping samples_from and samples_to). Would you be so kind as to guide me? It seems doable according to the examples, but I can't figure out how.
LatentComposite copies the image from samples_from and pastes it on top of samples_to. The x and y are the coordinates for where to paste the image, i.e. the offset.
x=0, y=0 is the top left of the image.
feather blends the pasted edges with the other image over that many pixels, to try to keep things seamless.
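To make the x/y/feather behavior concrete, here is a minimal sketch of that paste-and-feather operation on plain NumPy arrays. This is a simplification for illustration, not the node's actual code (which operates on torch latent tensors), and `composite` is a name chosen here, not a real API:

```python
import numpy as np

def composite(samples_to, samples_from, x, y, feather):
    """Paste samples_from onto samples_to at offset (x, y), linearly
    feathering the pasted edges over `feather` pixels.
    Arrays are (channels, height, width); assumes feather is at most
    half the pasted region's size."""
    out = samples_to.copy()
    c, h, w = samples_from.shape
    H, W = samples_to.shape[1:]
    h = min(h, H - y)  # clip the paste to the canvas
    w = min(w, W - x)
    # Blend mask: 1.0 inside, ramping toward 0 at the pasted edges.
    mask = np.ones((h, w), dtype=samples_from.dtype)
    for i in range(feather):
        t = (i + 1) / feather
        if y > 0:
            mask[i, :] = np.minimum(mask[i, :], t)          # top edge
        if y + h < H:
            mask[h - 1 - i, :] = np.minimum(mask[h - 1 - i, :], t)  # bottom
        if x > 0:
            mask[:, i] = np.minimum(mask[:, i], t)          # left
        if x + w < W:
            mask[:, w - 1 - i] = np.minimum(mask[:, w - 1 - i], t)  # right
    out[:, y:y+h, x:x+w] = (samples_from[:, :h, :w] * mask
                            + samples_to[:, y:y+h, x:x+w] * (1 - mask))
    return out
```

Note that an edge flush with the canvas border gets no feather ramp; only interior edges are blended into the underlying image.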
Thanks. I'm a bit slow and I don't speak English, but now I understand. The coordinates are fine, but since it's an overlay rather than a blend, you only see one of the images. Also, feather only activates when an edge is away from the border, and only along that side. In short, it's not what I was looking for: I wanted to mix the information from both images in latent space, the way controlnet + img2img does. On that front, using latent composition with feather plus controlnet, I managed to have image A acquire the illumination of image B while controlnet preserves its basic appearance. While this vaguely does one of the things I'm after, it's very rudimentary; any editor with an opacity slider would do a much better job of creating the reference blend. Unfortunately, I don't understand the mechanism by which controlnet blends in so well; the downside is that it can only use maps and can't keep the original image. Well, I'll keep experimenting. 😌
So you want a node that can average two latent images?
1 - I want the effect of controlnet + img2img, where img2img influences the final composition organically. The same thing, but combining two img2imgs without it looking like a crude pastiche miles from what the first method achieves.
2 - (Since asking is free:) I would like to do the same using the method of the img2img alternative test script. That would be much more interesting, since there the image is reconstructed from scratch, which implies it could be manipulated from the root during the inversion. Being able to influence that reconstruction with a second image is an idea I'm impatient to know is possible.
Of course, if you can create such a node, it would open the door for interesting experiments.💪
Can I ask a question regarding this: do you think it would be possible to apply a matrix transformation when combining the images? I.e., instead of pasting an image with a certain offset, you map pixel a1 to pixel b1, pixel a2 to pixel b2, etc., where the a_i and b_i are defined by a matrix A?
(It would enable giving certain parts of an image a rotation.)
I want the effect of controlnet + img2img, where img2img influences the final composition organically. The same thing, but combining two img2imgs without it looking like a crude pastiche miles from what the first method achieves.
How img2img works is that noise is added to the initial image before it is denoised, so I'm not sure how two img2imgs could be combined the way controlnets are. Averaging the two images before adding the noise most likely wouldn't give you the results you want.
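For reference, "averaging the two images" before the sampler adds its noise would just be a per-element weighted mean of the two latents. A minimal sketch on plain NumPy arrays (`blend_latents` is a hypothetical helper, not an existing node):

```python
import numpy as np

def blend_latents(latent_a, latent_b, weight=0.5):
    """Per-element weighted mean of two same-shaped latents.
    weight=1.0 keeps latent_a unchanged; weight=0.0 keeps latent_b."""
    assert latent_a.shape == latent_b.shape, "latents must match in shape"
    return weight * latent_a + (1.0 - weight) * latent_b
```

Which is presumably why it wouldn't give the desired result: a plain average tends toward a double-exposure look rather than the organic merge controlnet produces.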
Can I ask a question regarding this: do you think it would be possible to apply a matrix transformation when combining the images?
Yes. There is already a node for latent rotation, though.
Yeah, I think I get it. Controlnet creates a latent image from the control (map) parameters, which is why it can blend in so well with the noise from img2img. While I see uses for two img2imgs in various contexts, it's a more limited resource. That's why I'm so interested in the inverter proposal, such as img2img alternative test. Is it possible to implement? I've tested it a lot and the inversion is accurate as well as fast. So much potential there.
Thanks. I tried the latent rotation, but rotating in latent space seems to cause problems: decoding or sampling more after the rotation "undoes" it, or yields highly surrealistic images (life hack, ha, ha). I'm trying to get it to work for 3D texturing: given a 3D model, each camera angle has a specific 3x2 matrix that renders the object in 2D, so you can calculate the 2x2 matrices that translate pixels from one perspective onto the other, apply them in a loop, et voilà (?).
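The 2x2 remapping described above could be sketched with inverse mapping and nearest-neighbour sampling on a plain array. `warp_latent` is a hypothetical helper for illustration; a real version would want proper interpolation, and (given the decoding problems mentioned) might be better applied to the image before encoding rather than to the latent itself:

```python
import numpy as np

def warp_latent(latent, A, center=True):
    """Remap a (channels, h, w) array with a 2x2 matrix A using inverse
    mapping: each output pixel samples the input at A^-1 @ p (nearest
    neighbour). Pixels that map out of bounds are left at zero."""
    c, h, w = latent.shape
    A_inv = np.linalg.inv(A)
    out = np.zeros_like(latent)
    # Transform about the image center (or the top-left corner if not).
    cy, cx = ((h - 1) / 2.0, (w - 1) / 2.0) if center else (0.0, 0.0)
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse-map every output coordinate back into the source image.
    src = A_inv @ np.stack([ys.ravel() - cy, xs.ravel() - cx])
    sy = np.rint(src[0] + cy).astype(int)
    sx = np.rint(src[1] + cx).astype(int)
    ok = (sy >= 0) & (sy < h) & (sx >= 0) & (sx < w)
    out[:, ys.ravel()[ok], xs.ravel()[ok]] = latent[:, sy[ok], sx[ok]]
    return out
```

For example, A = identity leaves the array unchanged, and A = -identity gives a 180° rotation about the center; the per-view 2x2 matrices from the camera setup would be applied the same way, one view pair at a time.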