sd-forge-layerdiffuse
High-level instruction on how to use the LatentTransparencyOffsetEncoder model
Hi, thanks again for this awesome tech. I see in several issues that encoder support will come in the near future; thank you for that. In the meantime, if I want to use `LatentTransparencyOffsetEncoder` and test a few things out, what's the expected input? From reading the decoder:
- It seems like the input should be alpha then RGB. Is this correct?
- Are the input values in [-1, 1] or [0, 1]?
- From my simple autoencoding test, e.g. `LatentTransparencyOffsetEncoder(alpha, RGB) + sdvae.encode(masked_rgb) -> decode` (see the sketch after this list), it seems like not adding the offset performs better. Is this expected? Thanks again.
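For reference, here is roughly the round-trip I'm testing, written as a PyTorch sketch. The diffusers `AutoencoderKL` (loaded from `stabilityai/sd-vae-ft-mse`) stands in for `sdvae`, and `offset_encoder` is a placeholder stub with the interface I *think* the `LatentTransparencyOffsetEncoder` has (4-channel pixel input, 4-channel latent offset at 1/8 resolution); the alpha-then-RGB channel order and the [-1, 1] scaling are exactly the assumptions I'm asking about.

```python
import torch
import torch.nn as nn
from diffusers import AutoencoderKL

# SD VAE as a stand-in for `sdvae`
sdvae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()

# Placeholder with what I assume is the offset encoder's interface:
# 4-channel pixel-space input -> 4-channel latent-space offset at 1/8 resolution.
# In the real test this would be the pretrained LatentTransparencyOffsetEncoder.
offset_encoder = nn.Sequential(nn.Conv2d(4, 4, kernel_size=8, stride=8)).eval()

rgb = torch.rand(1, 3, 512, 512)    # RGB in [0, 1]
alpha = torch.rand(1, 1, 512, 512)  # alpha in [0, 1]

# "masked" RGB = RGB premultiplied by alpha, rescaled to [-1, 1] for the VAE
masked_rgb = (rgb * alpha) * 2.0 - 1.0

# Assumption 1: channel order is [alpha, R, G, B]
# Assumption 2: values are in [-1, 1] rather than [0, 1]
offset_input = torch.cat([alpha, rgb], dim=1) * 2.0 - 1.0

with torch.no_grad():
    latent = sdvae.encode(masked_rgb).latent_dist.mode()  # (1, 4, 64, 64)
    offset = offset_encoder(offset_input)                 # (1, 4, 64, 64)
    with_offset = sdvae.decode(latent + offset).sample
    without_offset = sdvae.decode(latent).sample          # this currently looks better for me
```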
Me neither; I am also wondering how this latent offset encoder should work.
> 3. `LatentTransparencyOffsetEncoder(alpha, RGB) + sdvae.encode(masked_rgb) -> decode`: it seems like not adding the offset performs better. Is this expected?
Hello, have you figured it out yet?
Hey people, see also updates:
https://github.com/layerdiffusion/sd-forge-layerdiffuse/issues/90#issuecomment-2156095009