Phil Wang
@Leo-T-Zang share a Weights & Biases report?
just turn off flash attention
`flash_attn = False` on your Unet
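for example (a minimal sketch, assuming the imagen-pytorch `Unet`; only `flash_attn = False` comes from the fix above, the import path and other kwargs are illustrative):

```python
# minimal sketch, not this repo's exact setup: only `flash_attn = False`
# is the actual fix, the other kwargs are illustrative assumptions
from imagen_pytorch import Unet

unet = Unet(
    dim = 128,
    dim_mults = (1, 2, 4, 8),
    flash_attn = False  # turn off flash attention
)
```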
yea sure, or maybe it can be generalized to any CLIP model? CLIP can bridge any modality to any modality, after all
@lhaippp generalizing it to any model, with a text CLIP as one example, would be a great contribution!
@lhaippp looking forward to it!
@houxianxu ohh yea, i think someone else contributed the guided implementation. i'll look up who it is later and ping him or her
@sztoo yea i'm trying to figure that out too, but i don't see why it shouldn't work out of the box on a VAE latent embedding space, as it is...
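(a minimal sketch of that idea, assuming the diffusers `AutoencoderKL` as the VAE; the checkpoint name and shapes are illustrative, not from this repo:)

```python
# sketch of diffusing in a VAE latent space rather than pixel space;
# AutoencoderKL and the checkpoint below are assumptions from diffusers
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained('stabilityai/sd-vae-ft-mse').eval()

images = torch.randn(2, 3, 256, 256)  # stand-in batch of images in [-1, 1]

with torch.no_grad():
    latents = vae.encode(images).latent_dist.sample()  # (2, 4, 32, 32)
    # ... train / sample the diffusion model on `latents` here ...
    recon = vae.decode(latents).sample                 # back to pixel space
```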
@orhunutkuaydin wow, circle of willis! very cool 🚀 🙏
@harnvo totally missed this issue, thanks for identifying it!