
Crashing when applying this LoRA

razvanab opened this issue 1 year ago • 10 comments

The app crashes with no error message when I apply this LoRA:

https://civitai.com/models/251417

sd -m "D:\Stable-diffusion\ComfyUI\models\checkpoints\SDXL\himerosxl_v206.safetensors" --lora-model-dir D:\Stable-diffusion\ComfyUI\models\loras\SDXL --taesd "D:\Stable-diffusion\ComfyUI\models\vae\taesd\vae_taesdxl.safetensors" --clip-skip 2 --seed -1 --steps 8 --cfg-scale 1.0 -H 768 -W 768 -p " a woman, best eyes, freckle, full body shoot, looking at the viewer ,perfecteyes. <lora:MJ52_v2.0:1>, <lora:PerfectEyesXL:1>, <lora:pcm_sdxl_smallcfg_8step_converted:1>" --negative-prompt "lowres, hair over eyes, out of frame, messy drawing, amateur drawing, ugly face, bad face, bad teeth, (interlocked fingers, extra fingers, badly drawn hands and fingers, anatomically incorrect hands, bad anatomy), blurry, sketch, painting, simple background, white background"

razvanab (Sep 27 '24)

It could be related to https://github.com/leejet/stable-diffusion.cpp/issues/370

Do you get lots of logs looking like this?

[WARN ] lora.hpp:176  - unused lora tensor transformer.single_transformer_blocks.20.norm.linear.lora_A.weight
[WARN ] lora.hpp:176  - unused lora tensor transformer.single_transformer_blocks.20.norm.linear.lora_B.weight

stduhpf (Sep 27 '24)

No, I don't get any error messages. Actually, now I do; for some reason, there was no error earlier.

[WARN ] lora.hpp:176  - unused lora tensor lora.model_diffusion_model_output_blocks_2_1_conv.alpha
[WARN ] lora.hpp:176  - unused lora tensor lora.model_diffusion_model_output_blocks_2_1_conv.lora_down.weight
[WARN ] lora.hpp:176  - unused lora tensor lora.model_diffusion_model_output_blocks_2_1_conv.lora_up.weight
[WARN ] lora.hpp:176  - unused lora tensor model.diffusion_model_output_blocks_2_2_conv.alpha
[WARN ] lora.hpp:176  - unused lora tensor model.diffusion_model_output_blocks_5_2_conv.alpha
[WARN ] lora.hpp:186  - Only (2633 / 2638) LoRA tensors have been applied
[WARN ] lora.hpp:176  - unused lora tensor lora.model_diffusion_model_output_blocks_2_1_conv.alpha
[WARN ] lora.hpp:176  - unused lora tensor lora.model_diffusion_model_output_blocks_2_1_conv.lora_down.weight
[WARN ] lora.hpp:176  - unused lora tensor lora.model_diffusion_model_output_blocks_2_1_conv.lora_up.weight
[WARN ] lora.hpp:176  - unused lora tensor model.diffusion_model_output_blocks_2_2_conv.alpha
[WARN ] lora.hpp:176  - unused lora tensor model.diffusion_model_output_blocks_5_2_conv.alpha
[WARN ] lora.hpp:186  - Only (2633 / 2638) LoRA tensors have been applied

razvanab (Sep 27 '24)

I also have issue https://github.com/leejet/stable-diffusion.cpp/issues/418

geocine (Sep 28 '24)

If you roll back to an earlier checkpoint, does the issue persist? Be sure to regenerate the ggml submodule if you do so.

grauho (Sep 29 '24)

"Be sure to regenerate the ggml submodule if you do so." How do I do this?

razvanab (Sep 29 '24)

"Be sure to regenerate the ggml submodule if you do so." How do I do this?

After you check out the earlier checkpoint that you want to try, you just run:

git submodule init
git submodule update

before building with cmake, the same as if you were building from a fresh sdcpp repo. This should bring ggml back to whatever version was associated with sdcpp at that checkpoint.
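
Roughly, the whole sequence would look like this (just a sketch: <older-commit> is a placeholder for whichever commit you want to test, and the build step is the usual cmake flow from the README):

git checkout <older-commit>
git submodule init
git submodule update
mkdir build
cd build
cmake ..
cmake --build . --config Release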

grauho (Sep 29 '24)

I tried with a couple of other checkpoints, and it's the same. Now I'll build from a fresh sdcpp repo and see how that goes.

razvanab (Sep 29 '24)

I'm wondering if the issue is instead with the LoRA itself, because the vast majority of its tensors are loading correctly and it might just contain some orphaned or duplicate alpha weights. I'm not surprised that the only lora_up/lora_down failure is output_blocks_2_1_conv, because if I recall correctly that is a hard-coded special case in lora.hpp, though for what reason it's that way I do not know.
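
As a quick sanity check (just a sketch, assuming the Windows command prompt; the log file name is arbitrary), the output can be captured and filtered for that warning to confirm the conv/alpha entries above are the only tensors being skipped:

sd -m "D:\Stable-diffusion\ComfyUI\models\checkpoints\SDXL\himerosxl_v206.safetensors" [same remaining arguments as in the first post] > sd_log.txt 2>&1
findstr /C:"unused lora tensor" sd_log.txt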

grauho (Sep 29 '24)

"I also have issue #418"

I believe your issue is distinct from this one; Flux LoRAs are still in a bit of a state at the moment due to the lack of good documentation on the various naming conventions being used by the new training programs.

grauho (Sep 29 '24)

I compiled the app, and it still does the same thing. In the end, the problem could be that LoRA itself.

razvanab (Sep 29 '24)

This should be fixed.

leejet (Nov 13 '25)

Thank you, sir.

razvanab (Nov 14 '25)