extract_lora does not work because module keys don't match any SD1.x models
Hey again.
Edit: I see it works for SD2.x models so I guess the SD1.x keys are not the same and need to be added. Is it SpatialTransformer that's missing?
I was trying out extract_lora_from_models.py but realized that UNET_TARGET_REPLACE_MODULE = ["Transformer2DModel", "Attention"] never matches any layers in any model I throw at it, so the result is always "create LoRA for U-Net: 0 modules." and an empty output file.
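For anyone debugging the same thing: the script matches modules by class name, so you can check which of the target names actually exist in your loaded U-Net. Here is a minimal sketch (the count_target_modules helper is hypothetical, not part of the script; it assumes the model exposes named_modules() like a torch.nn.Module):

```python
from collections import Counter

def count_target_modules(model, targets=("Transformer2DModel", "Attention")):
    """Count submodules whose class name is in `targets`.

    `model` is assumed to expose named_modules() like a torch.nn.Module
    (e.g. the U-Net of a diffusers pipeline). If this returns an empty
    Counter, the extraction script will report "0 modules" as well.
    """
    return Counter(
        module.__class__.__name__
        for _, module in model.named_modules()
        if module.__class__.__name__ in targets
    )
```

Printing the full set of class names from named_modules() also shows quickly whether your Diffusers version still uses older names such as SpatialTransformer instead of Transformer2DModel.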
Are these the correct keys for SD1.x models?
Hi, I think you may be using an older version of Diffusers. If the version is older than 0.10.2, please update to 0.10.2.
Seems like a reboot fixed it; I'm able to extract now. However, the results seem much weaker than I would expect: if the original model were a 0 and the tuned model a 10, the extraction result is about a 3. How close should the extracted model be to the tuned model?
I closed this not realizing I was testing with the wrong model. An extracted LoRA is still unable to match the outputs of a trained model, no matter what dim is used. And I don't mean small differences: you need 200% or more strength for the extracted LoRA to even come close.
This may just be the expected outcome, since a Dreambooth model trains more layers than a LoRA extraction captures; if that is the case, feel free to just close this ticket.
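For context on why the match can't be exact: LoRA extraction of this kind typically works by taking the weight difference between the tuned and base models and keeping only its top singular directions, so anything outside the chosen rank is discarded by construction. A minimal NumPy sketch of that idea (a hypothetical extract_lora helper, not the script's actual code):

```python
import numpy as np

def extract_lora(w_base, w_tuned, rank):
    """Approximate the weight delta with a rank-`rank` factorization.

    Returns (up, down) such that up @ down approximates w_tuned - w_base.
    The truncation to `rank` singular values is exactly the information
    a low-dim LoRA cannot recover from a full fine-tune.
    """
    delta = w_tuned - w_base
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    down = vt[:rank]              # shape: (rank, in_features)
    up = u[:, :rank] * s[:rank]   # shape: (out_features, rank)
    return up, down
```

At full rank the product reproduces the delta exactly; at small dims the residual error grows, which is one plausible reason an extracted LoRA needs extra strength to approach the trained model (and, as noted above, it also only covers the layer types the script targets at all).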
Was fixed at some point