stable-diffusion-webui-forge
Implementing lora-ctl with webui-forge
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What would your feature do?
I'd like to get the sd-webui-loractl extension working with webui-forge. It works with the latest Automatic1111 commit, so I'm guessing one of the more central changes here is making it incompatible.
Currently, it looks like the injected ExtraNetworkLora wrapper params are being inserted (it passes a dummy 1.0), but the new logic isn't being triggered.
I'm open to forking (or rewriting) it myself, but I'd appreciate any guidance on where to start.
Proposed workflow
- Install extension
- Activate in WebUI
- Use <lora:lora_name:0.0@0,1.0@1> syntax
Additional information
No response
I agree, this is a much-needed extension.
By any chance, do you use the Civitai Helper extension, and is it working for you in this fork?
Sorry, I don't use that extension.
second that feature request
I would also love to see this extension working in this fork. It's so underrated. It's really awesome being able to control at what step a too-strong LoRA kicks in and then out, or to mix LoRAs however you want. Here's a recent post that covered it: https://old.reddit.com/r/StableDiffusion/comments/1aqlvi0/psa_dont_ignore_the_sdwebuiloractl_extension_for/
Hi there. I'm the author of loractl. I've been out of the SD scene for a bit and don't have bandwidth to adapt loractl to Forge, but it's fundamentally a monkeypatch on A1111's network handling code. It's doing two specific things:
- First, it registers itself as the handler for the lora extra network when it's enabled. This enables it to take over parsing for those <lora:...> blocks; A1111 automatically feeds them to the extension.
- Second, it patches the network.Network class by hijacking its te_multiplier and unet_multiplier properties, replacing them with properties that call a function in loractl to compute the te/unet multipliers (the lora weight) instead. Those functions factor the current generation step into the value they return.
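To make that second mechanism concrete, here is a minimal, self-contained sketch of the property-hijack idea. This is not the actual loractl or A1111 code; the Network stand-in, the weight_at_step helper, and the step-tracking globals are illustrative assumptions.

```python
import numpy as np

# Hypothetical shared state the extension would update on every sampling step.
current_step = 0
total_steps = 20

def weight_at_step(keyframes, step, max_steps):
    """Interpolate a weight from (weight, position) keyframes, where position
    is treated here as a fraction of the total steps."""
    xs = [pos * (max_steps - 1) for _, pos in keyframes]
    ys = [w for w, _ in keyframes]
    return float(np.interp(step, xs, ys))

class Network:  # stand-in for A1111's network.Network
    def __init__(self, keyframes):
        self.keyframes = keyframes  # e.g. [(0.0, 0.0), (1.0, 1.0)] from <lora:x:0.0@0,1.0@1>

# Hijack the attribute with a property that recomputes the multiplier from the
# current step every time A1111 reads it, instead of returning a fixed value.
Network.te_multiplier = property(
    lambda self: weight_at_step(self.keyframes, current_step, total_steps)
)
```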
A1111, on each step, checks to see which extra networks are in play, and at what weights. Fudging the te_multiplier/unet_multiplier weights has the effect of causing A1111 to unapply the previous lora weights and reapply new ones at the new weights given. Forge appears to do something similar, which means the concept should work in theory! (Edit: after looking at this, it looks like Forge only attempts to set up networks once per image, rather than once per step, so loractl as it's implemented in A1111 would not work.)
All the heavy lifting is done by A1111 - all loractl is really doing is changing the te_multiplier and unet_multiplier values on a per-step basis, which just happens to make A1111 do the right thing.
From a quick gander at the Forge network handling, te/unet weights are calculated during network activation, but rather than being taken from overridable properties, they're taken directly from the params parameter passed to activate. To make the loractl concept work, ExtraNetworkLora#activate would have to be monkeypatched with a replacement implementation which would parse the params list with the multi-step parser (that could be lifted directly from loractl's utils), then compute the te/unet weights based on the keyframes parsed from the params, the current step, and the total number of desired steps, like loractl does here.
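A rough, self-contained sketch of what that monkeypatch could look like, using stand-in classes so it runs on its own. The attribute names (params.items, p.current_step, p.steps) and the helper functions are assumptions for illustration, not Forge's actual API.

```python
import numpy as np

class FakeParams:  # stand-in for the params objects passed to activate
    def __init__(self, items):
        self.items = items  # e.g. ["my_lora", "0.0@0,1.0@1"]

class FakeProcessing:  # stand-in carrying step info; attribute names are assumed
    def __init__(self, current_step, steps):
        self.current_step, self.steps = current_step, steps

class ExtraNetworkLora:  # stand-in for Forge's extra-network handler
    def activate(self, p, params_list):
        for params in params_list:
            print(f"applying {params.items[0]} at weight {params.items[1]}")

def parse_keyframes(spec):
    """Parse 'w0@p0,w1@p1,...' into (weight, position) pairs; positions are
    treated here as fractions of the total steps (an assumption)."""
    return [(float(w), float(pos) if pos else 1.0)
            for w, _, pos in (part.partition("@") for part in spec.split(","))]

def weight_at_step(keyframes, step, max_steps):
    xs = [pos * (max_steps - 1) for _, pos in keyframes]
    return float(np.interp(step, xs, [w for w, _ in keyframes]))

original_activate = ExtraNetworkLora.activate

def patched_activate(self, p, params_list):
    # Resolve each multi-step spec to a single weight for the current step,
    # then defer to the original implementation with the rewritten params.
    for params in params_list:
        keyframes = parse_keyframes(params.items[1])
        params.items[1] = f"{weight_at_step(keyframes, p.current_step, p.steps):.3f}"
    return original_activate(self, p, params_list)

ExtraNetworkLora.activate = patched_activate

# Example: at step 10 of 20, "0.0@0,1.0@1" resolves to a weight of ~0.53.
ExtraNetworkLora().activate(FakeProcessing(10, 20), [FakeParams(["my_lora", "0.0@0,1.0@1"])])
```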
In theory it shouldn't be too difficult, but it would probably need to be a fork of loractl, since it would be targeting a fundamentally different codebase, and loractl is tiny enough that a little copying is better than a little dependency.
If the authors of Forge wanted to support it, based on a 3-minute review of the code, I think that just directly patching the ExtraNetworkLora#activate method to understand and use the extended syntax should be sufficient to get loractl-style functionality in Forge. It'd certainly be a lot cleaner than trying to patch it in with an extension.
Thanks a lot. We will take a look at the lora system soon.
@lllyasviel Thank you!!! This is absolutely needed!!! It's amazing being able to force the initial generation to place the character first before the LoRA does its work. It's even better when paired with BREAK keywords!
Thanks a lot! It will be very handy to have dynamic weighting for LoRAs in Forge!
I second that!
I third this! :D
> A1111, on each step, checks to see which extra networks are in play, and at what weights. Fudging the te_multiplier/unet_multiplier weights has the effect of causing A1111 to unapply the previous lora weights and reapply new ones at the new weights given.
@cheald Is this computationally expensive?
Yes, that's why loractl tanks it/s during periods when the lora weights are changing step by step.
Any news on the matter? This extension would be so awesome.
> Thanks a lot. We will take a look at the lora system soon.
@lllyasviel Hello - I see you are very busy, with very important things!
You flagged this issue as "High Priority" - I just wanted to remind you :)
I don't know if this has any hope of fitting into your plans, but it would be so nice.
Keep up the amazing work! It is very exciting to have you back