Damian
As a Mac user I'm not at all a fan of the pinch-to-zoom gesture; it always feels awkward. My preference would be for the following: "Two finger flick"...
sorry, i'm not interested in taking this on.. hope you find someone though!
@Abhinay1997 @brucethemoose i've implemented block-weighted merging in `grate`, based on a modified version of checkpoint_merger. check it out here: https://github.com/damian0815/grate/blob/main/src/sdgrate/checkpoint_merger_mbw.py (or just `pip install sdgrate` and then run...
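for anyone curious how block-weighted merging works in principle, here's a hedged sketch (not grate's actual code; the function name is mine and the `input_blocks`/`output_blocks` key patterns are assumptions based on the original LDM checkpoint layout). the idea: instead of one global alpha, every UNet block gets its own interpolation weight.

```python
import re
import torch

# hedged sketch of merge-block-weighted (MBW) merging, not grate's actual code.
# each UNet block gets its own interpolation weight instead of a single alpha.
def merge_block_weighted(theta_a, theta_b, input_weights, mid_weight, output_weights):
    merged = {}
    for key, a in theta_a.items():
        b = theta_b[key]
        alpha = mid_weight  # default for middle_block and non-block keys
        m = re.search(r"input_blocks\.(\d+)\.", key)
        if m:
            alpha = input_weights[int(m.group(1))]
        m = re.search(r"output_blocks\.(\d+)\.", key)
        if m:
            alpha = output_weights[int(m.group(1))]
        merged[key] = torch.lerp(a, b, alpha)  # (1 - alpha) * a + alpha * b
    return merged
```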
@Abhinay1997 i made a PR
this should be merged into main asap, even if that means the frontend isn't using it. attention map collection is always-on in this PR, but the memory/performance impact is negligible.
re: the prompt syntax. is this the way LoRAs are going to be activated, as a prompt term? if so, i'd suggest a more explicit syntax in line with the rest...
> Therefore a nice API that is both somewhat clean and flexible is to just let the user write "CrossAttentionProcessor" classes that are by default weight-less and take (query_weight,...
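(for anyone skimming, the proposal reads roughly like the sketch below; all names are illustrative, not an actual diffusers API.)

```python
import torch

class CrossAttentionProcessor:
    """illustrative sketch of the proposed API: the processor holds no weights
    of its own; the projection weights are handed in at call time."""

    def __call__(self, hidden_states, encoder_hidden_states,
                 query_weight, key_weight, value_weight, scale):
        # project with the externally-owned weights
        query = hidden_states @ query_weight.T
        key = encoder_hidden_states @ key_weight.T
        value = encoder_hidden_states @ value_weight.T
        # standard scaled dot-product attention
        probs = torch.softmax(query @ key.transpose(-1, -2) * scale, dim=-1)
        return probs @ value
```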
> if all previously trained weights stay the same and if all previously trained weights are used

@patrickvonplaten one thing we are looking at using is [LoRA](https://github.com/cloneofsimo/lora), which trains "the...
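(for context: LoRA, as described in the linked repo, freezes the original weights and trains only a low-rank residual on top of them. a minimal illustrative sketch; the class and parameter names are mine.)

```python
import torch
import torch.nn as nn

# illustrative LoRA sketch: freeze the base weight W and train only the
# low-rank update B @ A, so the effective weight is W + (alpha / r) * B @ A
class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # previously trained weights stay the same
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        # base projection plus the trained low-rank residual
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)
```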
@andreaferretti i've successfully used the AttnProcessor api in InvokeAI - see for example [SlicedSwapCrossAttnProcesser](https://github.com/invoke-ai/InvokeAI/blob/ffe0e81ec9bf071d424dc9eef35368c681d5a294/ldm/models/diffusion/cross_attention_control.py#L567) which gets used [like this](https://github.com/invoke-ai/InvokeAI/blob/31146eb7975d3755d23b9819ff11b1a8275c25fe/ldm/models/diffusion/cross_attention_control.py#L354).
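stripped down, the pattern looks roughly like this hedged sketch (it mirrors what i understand the default diffusers processor to do; the class name is mine, and the hook point for inspecting or swapping attention is marked):

```python
import torch
from diffusers import StableDiffusionPipeline

class InspectableAttnProcessor:
    # follows the AttnProcessor-style __call__ signature used by diffusers
    def __call__(self, attn, hidden_states, encoder_hidden_states=None,
                 attention_mask=None):
        context = encoder_hidden_states if encoder_hidden_states is not None else hidden_states
        query = attn.head_to_batch_dim(attn.to_q(hidden_states))
        key = attn.head_to_batch_dim(attn.to_k(context))
        value = attn.head_to_batch_dim(attn.to_v(context))
        attention_probs = attn.get_attention_scores(query, key, attention_mask)
        # ...inspect or swap attention_probs here, as the InvokeAI processors do...
        hidden_states = attn.batch_to_head_dim(torch.bmm(attention_probs, value))
        hidden_states = attn.to_out[0](hidden_states)  # output projection
        return attn.to_out[1](hidden_states)           # dropout

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.unet.set_attn_processor(InspectableAttnProcessor())
```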
fwiw @Ephil012 Compel also supports long prompts as of v0.1.10 (released yesterday), which i'd expect makes the LPW pipeline pretty much redundant. as the maintainer of Compel i'm closely involved...
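usage looks roughly like this (a hedged sketch; it assumes Compel's `truncate_long_prompts` flag and the `pad_conditioning_tensors_to_same_length` helper, and notes that classifier-free guidance needs both embeddings padded to the same length):

```python
from compel import Compel
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
compel = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder,
                truncate_long_prompts=False)  # allow prompts past the 77-token limit

conditioning = compel.build_conditioning_tensor("a very long prompt that runs well past 77 tokens ...")
negative = compel.build_conditioning_tensor("")
# both embeddings must have the same length for classifier-free guidance
[conditioning, negative] = compel.pad_conditioning_tensors_to_same_length([conditioning, negative])

image = pipe(prompt_embeds=conditioning, negative_prompt_embeds=negative).images[0]
```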