intrinsic-lora
Question: could textual inversion be used to get these models to express these concepts?
If a text-to-image generator internally represents a particular concept, would it follow that all it needs to generate an output showing that concept is an appropriate input requesting it? For instance, could you use textual inversion to learn an input embedding that communicates "output only what you perceive to be albedo"?
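To make the idea concrete, here is a minimal sketch of what such an experiment might look like, following the standard textual-inversion recipe (Gal et al. 2022) on Stable Diffusion via diffusers. Everything here is an assumption rather than anything from this repo: the model ID, the placeholder token `<albedo>`, the prompt template, the hyperparameters, and the existence of a dataset of ground-truth albedo maps. It learns a single new token embedding such that denoising conditioned on that token reconstructs albedo images.

```python
# Hedged sketch: textual inversion trained against ground-truth albedo maps,
# so the learned token <albedo> nudges generation toward albedo-like outputs.
# Model ID, prompt, dataset, and hyperparameters are placeholders.
import torch
import torch.nn.functional as F
from diffusers import StableDiffusionPipeline, DDPMScheduler

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)
tokenizer, text_encoder = pipe.tokenizer, pipe.text_encoder
vae, unet = pipe.vae, pipe.unet
noise_scheduler = DDPMScheduler.from_config(pipe.scheduler.config)

# Register a new placeholder token and give it a trainable embedding row,
# initialised from a loosely related existing word.
placeholder = "<albedo>"
tokenizer.add_tokens(placeholder)
text_encoder.resize_token_embeddings(len(tokenizer))
token_id = tokenizer.convert_tokens_to_ids(placeholder)
embeddings = text_encoder.get_input_embeddings()
with torch.no_grad():
    init_id = tokenizer.convert_tokens_to_ids("texture")
    embeddings.weight[token_id] = embeddings.weight[init_id].clone()

# Freeze the whole model; only the embedding table receives gradients,
# and we mask those so only the new token's row actually updates.
for module in (vae, unet, text_encoder):
    module.requires_grad_(False)
embeddings.weight.requires_grad_(True)
optimizer = torch.optim.AdamW([embeddings.weight], lr=5e-4, weight_decay=0.0)

prompt = f"a photo in the style of {placeholder}"
input_ids = tokenizer(
    prompt, padding="max_length", max_length=tokenizer.model_max_length,
    truncation=True, return_tensors="pt",
).input_ids.to(device)

def training_step(albedo_pixels):
    """albedo_pixels: (B, 3, 512, 512) ground-truth albedo maps scaled to [-1, 1]."""
    latents = vae.encode(albedo_pixels).latent_dist.sample() * vae.config.scaling_factor
    noise = torch.randn_like(latents)
    timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps,
                              (latents.shape[0],), device=device)
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
    text_embeds = text_encoder(input_ids.expand(latents.shape[0], -1))[0]
    pred = unet(noisy_latents, timesteps, encoder_hidden_states=text_embeds).sample
    loss = F.mse_loss(pred, noise)

    optimizer.zero_grad()
    loss.backward()
    # Zero gradients for every embedding row except the new token's.
    mask = torch.zeros_like(embeddings.weight.grad)
    mask[token_id] = 1.0
    embeddings.weight.grad.mul_(mask)
    optimizer.step()
    return loss.item()
```

One open question with this sketch: plain textual inversion ties the token to the overall distribution of albedo images, not to the albedo of a particular input scene. To extract "the albedo of this image" you would presumably still need some form of image conditioning (e.g. starting the denoising from the input image, img2img-style), which is part of what the question is asking about.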