multidiffusion-upscaler-for-automatic1111
Doesn't seem to work with refiner
Self-explanatory. It works fine on normal generation, with the upscaler, and with everything SDXL, but the moment you enable the refiner, the old "GPU's VRAM gets deep-fried" behavior you hoped never to see again returns.
It is quite sad, considering that some models DO need the refiner to produce good or better images.
As far as I know, the refiner is dynamically loaded/off-loaded at runtime, and sd-webui has no good way to notify extensions of that change. We therefore get no chance to hook the newly loaded model, which causes the final crash :( Sorry for it
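To make the failure mode above concrete, here is a minimal sketch in plain Python (no webui dependency; the class and function names are illustrative, not real webui APIs). An extension monkey-patches the forward pass of the model object it sees at startup, but when the host silently swaps in a freshly loaded refiner, that new object never receives the patch, so it runs without the VRAM-saving hook:

```python
# Illustration of why runtime model swaps break extension hooks.
# "UNet" and "patch_forward" are hypothetical stand-ins, not webui code.

class UNet:
    def forward(self, x):
        return x + 1

def patch_forward(model):
    """Wrap model.forward the way a tiling extension might at startup."""
    original = model.forward
    def tiled_forward(x):
        # A real extension would split x into tiles here to save VRAM.
        return original(x)
    model.forward = tiled_forward
    model.patched = True

base = UNet()
patch_forward(base)                # extension hooks the base model once
assert getattr(base, "patched", False)

refiner = UNet()                   # host loads the refiner later at runtime...
# ...but no "model loaded" event reaches the extension, so:
assert not getattr(refiner, "patched", False)  # refiner runs unhooked
```

Without some "model loaded" callback fired at swap time, the extension has no point at which it could re-apply `patch_forward` to the refiner.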
In previous sd-webui versions I've used Dynamic Thresholding after running noise inversion without it while in MultiDiffusion mode. That allowed me to activate a refiner model. I don't use SDXL, however, so I'm adding a regular SD 1.5 model instead. Actually switching to a "refiner" model has often largely reduced the it/s once it loads. Dynamic Thresholding also seems to work as intended when enabled, even though it displays an error in the console.
I wonder if I could train a copy of a model with https://rockeycoss.github.io/spo.github.io/ added in, to make a faux SD 1.5 refiner model. I tested just using the LoRA itself in normal generation and found I like the outputs significantly more when delaying its active timesteps by ~10% (using Lora Block Weight or loractl). I have no clue whether it's intended to be used that way, but my tests did look a lot better with the LoRA.