
Issue with loading PixArt Sigma Loras

Open chrish-slingshot opened this issue 1 year ago • 18 comments

Hey all. I'm getting issues when trying to load a LoRA created in OneTrainer for PixArt Sigma. No matter what options I train the LoRA with, I always get a wall of warning messages in the ComfyUI console when it hits the PixArt Lora Loader node, all of the form:

NOT LOADED diffusion_model.lora_transformer_transformer_blocks

The OneTrainer training run completes successfully, and I've tried various fp16/fp32/bf16 settings. Can anyone offer any guidance on this?
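For anyone hitting the same thing, a quick way to see what's actually failing is to bucket the `NOT LOADED` key names by prefix, which shows whether the text-encoder keys, the transformer keys, or both are being skipped. This is just an illustrative triage sketch over key strings copied from the log, not part of the loader:

```python
# Rough triage of "NOT LOADED" key names from the ComfyUI console log.
# Key names are copied from the warnings; the grouping logic is illustrative.
from collections import Counter

def group_unloaded_keys(keys):
    """Bucket LoRA key names by top-level prefix so it's easy to see
    which component (text encoder vs. transformer) failed to load."""
    buckets = Counter()
    for key in keys:
        # Strip the loader's "diffusion_model." wrapper if present.
        name = key.removeprefix("diffusion_model.")
        if name.startswith("lora_te_"):
            buckets["text_encoder"] += 1
        elif name.startswith("lora_transformer_"):
            buckets["transformer"] += 1
        else:
            buckets["other"] += 1
    return dict(buckets)

sample = [
    "diffusion_model.lora_te_encoder_block_0_layer_0_SelfAttention_q.weight",
    "diffusion_model.lora_transformer_transformer_blocks_0_attn1_to_k.weight",
]
print(group_unloaded_keys(sample))  # {'text_encoder': 1, 'transformer': 1}
```

Running this over the full log above shows every single text-encoder and transformer key is skipped, i.e. the loader recognises none of the OneTrainer key names.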

chrish-slingshot avatar Jun 11 '24 15:06 chrish-slingshot

Hi. If you don't mind, could you share one of the LoRA files so I can implement support for it?

The current implementation is largely based on the peft LoRA from the example training script.

city96 avatar Jun 11 '24 20:06 city96
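The mismatch here looks like a key-naming difference: OneTrainer writes kohya-style flattened names (underscores only, e.g. `lora_transformer_transformer_blocks_0_attn1_to_k`), while a peft-based loader expects dotted module paths (`transformer_blocks.0.attn1.to_k`). Underscore-to-dot conversion is ambiguous (`to_k` itself contains an underscore), so the usual trick is to match the flattened name against the model's real module paths. A minimal sketch, assuming `module_paths` comes from something like `model.named_modules()`; `resolve_flat_key` is a hypothetical helper, not the extension's actual code:

```python
def resolve_flat_key(flat_key, module_paths):
    """Map a kohya-style flattened key back to a dotted module path by
    comparing against the model's actual module names. Direct underscore
    splitting is ambiguous because module names like "to_k" contain
    underscores, so we flatten each known path and compare instead.
    `module_paths` would come from model.named_modules() in practice."""
    # Drop the network-type prefix kohya-style trainers prepend.
    flat = flat_key.removeprefix("lora_transformer_").removeprefix("lora_te_")
    for path in module_paths:
        if path.replace(".", "_") == flat:
            return path
    return None  # key does not correspond to any known module

paths = ["transformer_blocks.0.attn1.to_k", "transformer_blocks.0.attn1.to_q"]
resolve_flat_key("lora_transformer_transformer_blocks_0_attn1_to_k", paths)
# -> 'transformer_blocks.0.attn1.to_k'
```

If the loader only knows the peft dotted-path format, every kohya-style key would fall through unmatched, which would produce exactly the blanket `NOT LOADED` output shown below.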

Will do, thanks! I'm just in the middle of a training run but I'll upload a file as soon as possible.

chrish-slingshot avatar Jun 11 '24 20:06 chrish-slingshot

Example workflow.

lora_example.json

https://www.dropbox.com/scl/fi/43p5ym172sn65hlsm3wdu/test_lora.safetensors?rlkey=tr3dkoqfbjgvqsbdkufkxsl0x&st=gq809msv&dl=0

These are the warning messages:

NOT LOADED diffusion_model.lora_te_encoder_block_0_layer_0_SelfAttention_o.weight
NOT LOADED diffusion_model.lora_te_encoder_block_0_layer_0_SelfAttention_q.weight
NOT LOADED diffusion_model.lora_te_encoder_block_0_layer_0_SelfAttention_v.weight
NOT LOADED diffusion_model.lora_te_encoder_block_0_layer_1_DenseReluDense_wi_0.weight
NOT LOADED diffusion_model.lora_te_encoder_block_0_layer_1_DenseReluDense_wi_1.weight
NOT LOADED diffusion_model.lora_te_encoder_block_0_layer_1_DenseReluDense_wo.weight
NOT LOADED diffusion_model.lora_te_encoder_block_10_layer_0_SelfAttention_k.weight
NOT LOADED diffusion_model.lora_te_encoder_block_10_layer_0_SelfAttention_o.weight
NOT LOADED diffusion_model.lora_te_encoder_block_10_layer_0_SelfAttention_q.weight
NOT LOADED diffusion_model.lora_te_encoder_block_10_layer_0_SelfAttention_v.weight
NOT LOADED diffusion_model.lora_te_encoder_block_10_layer_1_DenseReluDense_wi_0.weight
NOT LOADED diffusion_model.lora_te_encoder_block_10_layer_1_DenseReluDense_wi_1.weight
NOT LOADED diffusion_model.lora_te_encoder_block_10_layer_1_DenseReluDense_wo.weight
NOT LOADED diffusion_model.lora_te_encoder_block_11_layer_0_SelfAttention_k.weight
NOT LOADED diffusion_model.lora_te_encoder_block_11_layer_0_SelfAttention_o.weight
NOT LOADED diffusion_model.lora_te_encoder_block_11_layer_0_SelfAttention_q.weight
NOT LOADED diffusion_model.lora_te_encoder_block_11_layer_0_SelfAttention_v.weight
NOT LOADED diffusion_model.lora_te_encoder_block_11_layer_1_DenseReluDense_wi_0.weight
NOT LOADED diffusion_model.lora_te_encoder_block_11_layer_1_DenseReluDense_wi_1.weight
NOT LOADED diffusion_model.lora_te_encoder_block_11_layer_1_DenseReluDense_wo.weight
NOT LOADED diffusion_model.lora_te_encoder_block_12_layer_0_SelfAttention_k.weight
NOT LOADED diffusion_model.lora_te_encoder_block_12_layer_0_SelfAttention_o.weight
NOT LOADED diffusion_model.lora_te_encoder_block_12_layer_0_SelfAttention_q.weight
NOT LOADED diffusion_model.lora_te_encoder_block_12_layer_0_SelfAttention_v.weight
NOT LOADED diffusion_model.lora_te_encoder_block_12_layer_1_DenseReluDense_wi_0.weight
NOT LOADED diffusion_model.lora_te_encoder_block_12_layer_1_DenseReluDense_wi_1.weight
NOT LOADED diffusion_model.lora_te_encoder_block_12_layer_1_DenseReluDense_wo.weight
NOT LOADED diffusion_model.lora_te_encoder_block_13_layer_0_SelfAttention_k.weight
NOT LOADED diffusion_model.lora_te_encoder_block_13_layer_0_SelfAttention_o.weight
NOT LOADED diffusion_model.lora_te_encoder_block_13_layer_0_SelfAttention_q.weight
NOT LOADED diffusion_model.lora_te_encoder_block_13_layer_0_SelfAttention_v.weight
NOT LOADED diffusion_model.lora_te_encoder_block_13_layer_1_DenseReluDense_wi_0.weight
NOT LOADED diffusion_model.lora_te_encoder_block_13_layer_1_DenseReluDense_wi_1.weight
NOT LOADED diffusion_model.lora_te_encoder_block_13_layer_1_DenseReluDense_wo.weight
NOT LOADED diffusion_model.lora_te_encoder_block_14_layer_0_SelfAttention_k.weight
NOT LOADED diffusion_model.lora_te_encoder_block_14_layer_0_SelfAttention_o.weight
NOT LOADED diffusion_model.lora_te_encoder_block_14_layer_0_SelfAttention_q.weight
NOT LOADED diffusion_model.lora_te_encoder_block_14_layer_0_SelfAttention_v.weight
NOT LOADED diffusion_model.lora_te_encoder_block_14_layer_1_DenseReluDense_wi_0.weight
NOT LOADED diffusion_model.lora_te_encoder_block_14_layer_1_DenseReluDense_wi_1.weight
NOT LOADED diffusion_model.lora_te_encoder_block_14_layer_1_DenseReluDense_wo.weight
NOT LOADED diffusion_model.lora_te_encoder_block_15_layer_0_SelfAttention_k.weight
NOT LOADED diffusion_model.lora_te_encoder_block_15_layer_0_SelfAttention_o.weight
NOT LOADED diffusion_model.lora_te_encoder_block_15_layer_0_SelfAttention_q.weight
NOT LOADED diffusion_model.lora_te_encoder_block_15_layer_0_SelfAttention_v.weight
NOT LOADED diffusion_model.lora_te_encoder_block_15_layer_1_DenseReluDense_wi_0.weight
NOT LOADED diffusion_model.lora_te_encoder_block_15_layer_1_DenseReluDense_wi_1.weight
NOT LOADED diffusion_model.lora_te_encoder_block_15_layer_1_DenseReluDense_wo.weight
NOT LOADED diffusion_model.lora_te_encoder_block_16_layer_0_SelfAttention_k.weight
NOT LOADED diffusion_model.lora_te_encoder_block_16_layer_0_SelfAttention_o.weight
NOT LOADED diffusion_model.lora_te_encoder_block_16_layer_0_SelfAttention_q.weight
NOT LOADED diffusion_model.lora_te_encoder_block_16_layer_0_SelfAttention_v.weight
NOT LOADED diffusion_model.lora_te_encoder_block_16_layer_1_DenseReluDense_wi_0.weight
NOT LOADED diffusion_model.lora_te_encoder_block_16_layer_1_DenseReluDense_wi_1.weight
NOT LOADED diffusion_model.lora_te_encoder_block_16_layer_1_DenseReluDense_wo.weight
NOT LOADED diffusion_model.lora_te_encoder_block_17_layer_0_SelfAttention_k.weight
NOT LOADED diffusion_model.lora_te_encoder_block_17_layer_0_SelfAttention_o.weight
NOT LOADED diffusion_model.lora_te_encoder_block_17_layer_0_SelfAttention_q.weight
NOT LOADED diffusion_model.lora_te_encoder_block_17_layer_0_SelfAttention_v.weight
NOT LOADED diffusion_model.lora_te_encoder_block_17_layer_1_DenseReluDense_wi_0.weight
NOT LOADED diffusion_model.lora_te_encoder_block_17_layer_1_DenseReluDense_wi_1.weight
NOT LOADED diffusion_model.lora_te_encoder_block_17_layer_1_DenseReluDense_wo.weight
NOT LOADED diffusion_model.lora_te_encoder_block_18_layer_0_SelfAttention_k.weight
NOT LOADED diffusion_model.lora_te_encoder_block_18_layer_0_SelfAttention_o.weight
NOT LOADED diffusion_model.lora_te_encoder_block_18_layer_0_SelfAttention_q.weight
NOT LOADED diffusion_model.lora_te_encoder_block_18_layer_0_SelfAttention_v.weight
NOT LOADED diffusion_model.lora_te_encoder_block_18_layer_1_DenseReluDense_wi_0.weight
NOT LOADED diffusion_model.lora_te_encoder_block_18_layer_1_DenseReluDense_wi_1.weight
NOT LOADED diffusion_model.lora_te_encoder_block_18_layer_1_DenseReluDense_wo.weight
NOT LOADED diffusion_model.lora_te_encoder_block_19_layer_0_SelfAttention_k.weight
NOT LOADED diffusion_model.lora_te_encoder_block_19_layer_0_SelfAttention_o.weight
NOT LOADED diffusion_model.lora_te_encoder_block_19_layer_0_SelfAttention_q.weight
NOT LOADED diffusion_model.lora_te_encoder_block_19_layer_0_SelfAttention_v.weight
NOT LOADED diffusion_model.lora_te_encoder_block_19_layer_1_DenseReluDense_wi_0.weight
NOT LOADED diffusion_model.lora_te_encoder_block_19_layer_1_DenseReluDense_wi_1.weight
NOT LOADED diffusion_model.lora_te_encoder_block_19_layer_1_DenseReluDense_wo.weight
NOT LOADED diffusion_model.lora_te_encoder_block_1_layer_0_SelfAttention_k.weight
NOT LOADED diffusion_model.lora_te_encoder_block_1_layer_0_SelfAttention_o.weight
NOT LOADED diffusion_model.lora_te_encoder_block_1_layer_0_SelfAttention_q.weight
NOT LOADED diffusion_model.lora_te_encoder_block_1_layer_0_SelfAttention_v.weight
NOT LOADED diffusion_model.lora_te_encoder_block_1_layer_1_DenseReluDense_wi_0.weight
NOT LOADED diffusion_model.lora_te_encoder_block_1_layer_1_DenseReluDense_wi_1.weight
NOT LOADED diffusion_model.lora_te_encoder_block_1_layer_1_DenseReluDense_wo.weight
NOT LOADED diffusion_model.lora_te_encoder_block_20_layer_0_SelfAttention_k.weight
NOT LOADED diffusion_model.lora_te_encoder_block_20_layer_0_SelfAttention_o.weight
NOT LOADED diffusion_model.lora_te_encoder_block_20_layer_0_SelfAttention_q.weight
NOT LOADED diffusion_model.lora_te_encoder_block_20_layer_0_SelfAttention_v.weight
NOT LOADED diffusion_model.lora_te_encoder_block_20_layer_1_DenseReluDense_wi_0.weight
NOT LOADED diffusion_model.lora_te_encoder_block_20_layer_1_DenseReluDense_wi_1.weight
NOT LOADED diffusion_model.lora_te_encoder_block_20_layer_1_DenseReluDense_wo.weight
NOT LOADED diffusion_model.lora_te_encoder_block_21_layer_0_SelfAttention_k.weight
NOT LOADED diffusion_model.lora_te_encoder_block_21_layer_0_SelfAttention_o.weight
NOT LOADED diffusion_model.lora_te_encoder_block_21_layer_0_SelfAttention_q.weight
NOT LOADED diffusion_model.lora_te_encoder_block_21_layer_0_SelfAttention_v.weight
NOT LOADED diffusion_model.lora_te_encoder_block_21_layer_1_DenseReluDense_wi_0.weight
NOT LOADED diffusion_model.lora_te_encoder_block_21_layer_1_DenseReluDense_wi_1.weight
NOT LOADED diffusion_model.lora_te_encoder_block_21_layer_1_DenseReluDense_wo.weight
NOT LOADED diffusion_model.lora_te_encoder_block_22_layer_0_SelfAttention_k.weight
NOT LOADED diffusion_model.lora_te_encoder_block_22_layer_0_SelfAttention_o.weight
NOT LOADED diffusion_model.lora_te_encoder_block_22_layer_0_SelfAttention_q.weight
NOT LOADED diffusion_model.lora_te_encoder_block_22_layer_0_SelfAttention_v.weight
NOT LOADED diffusion_model.lora_te_encoder_block_22_layer_1_DenseReluDense_wi_0.weight
NOT LOADED diffusion_model.lora_te_encoder_block_22_layer_1_DenseReluDense_wi_1.weight
NOT LOADED diffusion_model.lora_te_encoder_block_22_layer_1_DenseReluDense_wo.weight
NOT LOADED diffusion_model.lora_te_encoder_block_23_layer_0_SelfAttention_k.weight
NOT LOADED diffusion_model.lora_te_encoder_block_23_layer_0_SelfAttention_o.weight
NOT LOADED diffusion_model.lora_te_encoder_block_23_layer_0_SelfAttention_q.weight
NOT LOADED diffusion_model.lora_te_encoder_block_23_layer_0_SelfAttention_v.weight
NOT LOADED diffusion_model.lora_te_encoder_block_23_layer_1_DenseReluDense_wi_0.weight
NOT LOADED diffusion_model.lora_te_encoder_block_23_layer_1_DenseReluDense_wi_1.weight
NOT LOADED diffusion_model.lora_te_encoder_block_23_layer_1_DenseReluDense_wo.weight
NOT LOADED diffusion_model.lora_te_encoder_block_2_layer_0_SelfAttention_k.weight
NOT LOADED diffusion_model.lora_te_encoder_block_2_layer_0_SelfAttention_o.weight
NOT LOADED diffusion_model.lora_te_encoder_block_2_layer_0_SelfAttention_q.weight
NOT LOADED diffusion_model.lora_te_encoder_block_2_layer_0_SelfAttention_v.weight
NOT LOADED diffusion_model.lora_te_encoder_block_2_layer_1_DenseReluDense_wi_0.weight
NOT LOADED diffusion_model.lora_te_encoder_block_2_layer_1_DenseReluDense_wi_1.weight
NOT LOADED diffusion_model.lora_te_encoder_block_2_layer_1_DenseReluDense_wo.weight
NOT LOADED diffusion_model.lora_te_encoder_block_3_layer_0_SelfAttention_k.weight
NOT LOADED diffusion_model.lora_te_encoder_block_3_layer_0_SelfAttention_o.weight
NOT LOADED diffusion_model.lora_te_encoder_block_3_layer_0_SelfAttention_q.weight
NOT LOADED diffusion_model.lora_te_encoder_block_3_layer_0_SelfAttention_v.weight
NOT LOADED diffusion_model.lora_te_encoder_block_3_layer_1_DenseReluDense_wi_0.weight
NOT LOADED diffusion_model.lora_te_encoder_block_3_layer_1_DenseReluDense_wi_1.weight
NOT LOADED diffusion_model.lora_te_encoder_block_3_layer_1_DenseReluDense_wo.weight
NOT LOADED diffusion_model.lora_te_encoder_block_4_layer_0_SelfAttention_k.weight
NOT LOADED diffusion_model.lora_te_encoder_block_4_layer_0_SelfAttention_o.weight
NOT LOADED diffusion_model.lora_te_encoder_block_4_layer_0_SelfAttention_q.weight
NOT LOADED diffusion_model.lora_te_encoder_block_4_layer_0_SelfAttention_v.weight
NOT LOADED diffusion_model.lora_te_encoder_block_4_layer_1_DenseReluDense_wi_0.weight
NOT LOADED diffusion_model.lora_te_encoder_block_4_layer_1_DenseReluDense_wi_1.weight
NOT LOADED diffusion_model.lora_te_encoder_block_4_layer_1_DenseReluDense_wo.weight
NOT LOADED diffusion_model.lora_te_encoder_block_5_layer_0_SelfAttention_k.weight
NOT LOADED diffusion_model.lora_te_encoder_block_5_layer_0_SelfAttention_o.weight
NOT LOADED diffusion_model.lora_te_encoder_block_5_layer_0_SelfAttention_q.weight
NOT LOADED diffusion_model.lora_te_encoder_block_5_layer_0_SelfAttention_v.weight
NOT LOADED diffusion_model.lora_te_encoder_block_5_layer_1_DenseReluDense_wi_0.weight
NOT LOADED diffusion_model.lora_te_encoder_block_5_layer_1_DenseReluDense_wi_1.weight
NOT LOADED diffusion_model.lora_te_encoder_block_5_layer_1_DenseReluDense_wo.weight
NOT LOADED diffusion_model.lora_te_encoder_block_6_layer_0_SelfAttention_k.weight
NOT LOADED diffusion_model.lora_te_encoder_block_6_layer_0_SelfAttention_o.weight
NOT LOADED diffusion_model.lora_te_encoder_block_6_layer_0_SelfAttention_q.weight
NOT LOADED diffusion_model.lora_te_encoder_block_6_layer_0_SelfAttention_v.weight
NOT LOADED diffusion_model.lora_te_encoder_block_6_layer_1_DenseReluDense_wi_0.weight
NOT LOADED diffusion_model.lora_te_encoder_block_6_layer_1_DenseReluDense_wi_1.weight
NOT LOADED diffusion_model.lora_te_encoder_block_6_layer_1_DenseReluDense_wo.weight
NOT LOADED diffusion_model.lora_te_encoder_block_7_layer_0_SelfAttention_k.weight
NOT LOADED diffusion_model.lora_te_encoder_block_7_layer_0_SelfAttention_o.weight
NOT LOADED diffusion_model.lora_te_encoder_block_7_layer_0_SelfAttention_q.weight
NOT LOADED diffusion_model.lora_te_encoder_block_7_layer_0_SelfAttention_v.weight
NOT LOADED diffusion_model.lora_te_encoder_block_7_layer_1_DenseReluDense_wi_0.weight
NOT LOADED diffusion_model.lora_te_encoder_block_7_layer_1_DenseReluDense_wi_1.weight
NOT LOADED diffusion_model.lora_te_encoder_block_7_layer_1_DenseReluDense_wo.weight
NOT LOADED diffusion_model.lora_te_encoder_block_8_layer_0_SelfAttention_k.weight
NOT LOADED diffusion_model.lora_te_encoder_block_8_layer_0_SelfAttention_o.weight
NOT LOADED diffusion_model.lora_te_encoder_block_8_layer_0_SelfAttention_q.weight
NOT LOADED diffusion_model.lora_te_encoder_block_8_layer_0_SelfAttention_v.weight
NOT LOADED diffusion_model.lora_te_encoder_block_8_layer_1_DenseReluDense_wi_0.weight
NOT LOADED diffusion_model.lora_te_encoder_block_8_layer_1_DenseReluDense_wi_1.weight
NOT LOADED diffusion_model.lora_te_encoder_block_8_layer_1_DenseReluDense_wo.weight
NOT LOADED diffusion_model.lora_te_encoder_block_9_layer_0_SelfAttention_k.weight
NOT LOADED diffusion_model.lora_te_encoder_block_9_layer_0_SelfAttention_o.weight
NOT LOADED diffusion_model.lora_te_encoder_block_9_layer_0_SelfAttention_q.weight
NOT LOADED diffusion_model.lora_te_encoder_block_9_layer_0_SelfAttention_v.weight
NOT LOADED diffusion_model.lora_te_encoder_block_9_layer_1_DenseReluDense_wi_0.weight
NOT LOADED diffusion_model.lora_te_encoder_block_9_layer_1_DenseReluDense_wi_1.weight
NOT LOADED diffusion_model.lora_te_encoder_block_9_layer_1_DenseReluDense_wo.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_0_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_0_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_0_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_0_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_0_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_0_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_0_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_0_attn2_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_10_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_10_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_10_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_10_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_10_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_10_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_10_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_10_attn2_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_11_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_11_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_11_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_11_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_11_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_11_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_11_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_11_attn2_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_12_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_12_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_12_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_12_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_12_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_12_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_12_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_12_attn2_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_13_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_13_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_13_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_13_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_13_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_13_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_13_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_13_attn2_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_14_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_14_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_14_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_14_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_14_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_14_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_14_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_14_attn2_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_15_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_15_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_15_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_15_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_15_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_15_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_15_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_15_attn2_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_16_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_16_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_16_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_16_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_16_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_16_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_16_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_16_attn2_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_17_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_17_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_17_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_17_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_17_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_17_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_17_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_17_attn2_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_18_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_18_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_18_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_18_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_18_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_18_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_18_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_18_attn2_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_19_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_19_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_19_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_19_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_19_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_19_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_19_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_19_attn2_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_1_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_1_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_1_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_1_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_1_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_1_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_1_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_1_attn2_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_20_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_20_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_20_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_20_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_20_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_20_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_20_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_20_attn2_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_21_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_21_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_21_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_21_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_21_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_21_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_21_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_21_attn2_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_22_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_22_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_22_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_22_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_22_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_22_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_22_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_22_attn2_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_23_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_23_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_23_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_23_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_23_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_23_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_23_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_23_attn2_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_24_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_24_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_24_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_24_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_24_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_24_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_24_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_24_attn2_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_25_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_25_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_25_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_25_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_25_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_25_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_25_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_25_attn2_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_26_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_26_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_26_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_26_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_26_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_26_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_26_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_26_attn2_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_27_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_27_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_27_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_27_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_27_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_27_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_27_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_27_attn2_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_2_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_2_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_2_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_2_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_2_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_2_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_2_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_2_attn2_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_3_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_3_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_3_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_3_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_3_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_3_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_3_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_3_attn2_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_4_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_4_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_4_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_4_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_4_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_4_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_4_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_4_attn2_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_5_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_5_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_5_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_5_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_5_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_5_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_5_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_5_attn2_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_6_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_6_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_6_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_6_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_6_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_6_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_6_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_6_attn2_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_7_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_7_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_7_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_7_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_7_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_7_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_7_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_7_attn2_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_8_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_8_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_8_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_8_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_8_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_8_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_8_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_8_attn2_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_9_attn1_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_9_attn1_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_9_attn1_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_9_attn1_to_v.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_9_attn2_to_k.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_9_attn2_to_out_0.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_9_attn2_to_q.weight
NOT LOADED diffusion_model.lora_transformer_transformer_blocks_9_attn2_to_v.weight
```
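For context on why these keys fail to match: OneTrainer/peft-style LoRA files flatten the module path with underscores (e.g. `lora_transformer_transformer_blocks_3_attn2_to_q`), while the loader has to look keys up against the model's dotted state-dict names before `add_patches` can apply them. A rough sketch of the kind of remapping involved — this is a hypothetical helper for illustration, not the actual ComfyUI_ExtraModels code, and the exact dotted target format is an assumption:

```python
import re

def onetrainer_to_model_key(key: str) -> str:
    # Hypothetical conversion, e.g.:
    # "lora_transformer_transformer_blocks_3_attn2_to_q.weight"
    #   -> "diffusion_model.transformer_blocks.3.attn2.to_q.weight"
    key = key.removeprefix("lora_transformer_")
    # Re-insert dots around block indices and attention sub-modules
    key = re.sub(r"transformer_blocks_(\d+)_", r"transformer_blocks.\1.", key)
    key = re.sub(r"(attn\d)_", r"\1.", key)
    key = key.replace("to_out_0", "to_out.0")
    return "diffusion_model." + key

print(onetrainer_to_model_key(
    "lora_transformer_transformer_blocks_3_attn2_to_q.weight"))
# -> diffusion_model.transformer_blocks.3.attn2.to_q.weight
```

If any segment of the flattened name (like `to_out_0`) isn't handled by the conversion, the resulting key won't exist in the model and the loader reports it as NOT LOADED.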

chrish-slingshot avatar Jun 11 '24 20:06 chrish-slingshot

No clue if that fixes it but I made it match the correct keys. It still has a bunch of "missing keys" listed but it looks like OneTrainer just doesn't train those?

image

Also, I can't really realistically load the T5 part until I switch over to the SD3 code for that, so I'd turn off training the text encoder until then.

city96 avatar Jun 11 '24 23:06 city96

Yeah I've not been training the text encoder as it's too big.

I'm afraid I get an error now running that example workflow I attached:


```
'EXM_PixArt_ModelPatcher' object has no attribute 'model_keys'

File "S:\StableDiffusion\UI\ComfyUI_2024_06_06\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "S:\StableDiffusion\UI\ComfyUI_2024_06_06\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "S:\StableDiffusion\UI\ComfyUI_2024_06_06\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "S:\StableDiffusion\UI\ComfyUI_2024_06_06\ComfyUI\custom_nodes\ComfyUI_ExtraModels\PixArt\nodes.py", line 91, in load_lora
model_lora = load_pixart_lora(model, lora, lora_path, strength,)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "S:\StableDiffusion\UI\ComfyUI_2024_06_06\ComfyUI\custom_nodes\ComfyUI_ExtraModels\PixArt\lora.py", line 137, in load_pixart_lora
k = new_modelpatcher.add_patches(loaded, strength)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "S:\StableDiffusion\UI\ComfyUI_2024_06_06\ComfyUI\comfy\model_patcher.py", line 214, in add_patches
if k in self.model_keys:
^^^^^^^^^^^^^^^
```
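The AttributeError suggests the custom patcher subclass never ended up with the `model_keys` attribute that `add_patches` expects. A defensive sketch of how a loader could guard against that — hypothetical code, names assumed, not the actual fix applied to the repo:

```python
# Hypothetical guard: derive the key set lazily if the patcher
# subclass never initialised `model_keys` (e.g. after an upstream
# ComfyUI API change).
def get_model_keys(patcher):
    keys = getattr(patcher, "model_keys", None)
    if keys is None:
        keys = set(patcher.model.state_dict().keys())
        patcher.model_keys = keys  # cache for subsequent add_patches calls
    return keys
```

The real cause here may simply be a version mismatch between the custom node and the ComfyUI core `ModelPatcher`, in which case updating both is the actual fix.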

chrish-slingshot avatar Jun 12 '24 08:06 chrish-slingshot

EDIT: Removing the contents of this post as it turns out it was unrelated. Above issue still stands.

chrish-slingshot avatar Jun 12 '24 09:06 chrish-slingshot

I have a similar issue, using a LoRa trained with OneTrainer. Here is what I get when I try to use it: image

My LoRA is a bit borked so I cannot confirm whether it is working, but it's odd that this message shows up. Might still be working fine?

ejektaflex avatar Jun 17 '24 02:06 ejektaflex

I'm also seeing this issue; here is a LoRA file to test with: https://www.dropbox.com/scl/fi/30y9yn26ao8pnwch7z1ex/test_lora.zip?rlkey=r6kvgzwvrqm9tnw4jz8ctgu2f&st=zrnmlw0n&dl=0

frutiemax92 avatar Jul 01 '24 23:07 frutiemax92

I trained a PixArt Sigma 512 MS LoRA which isn't influencing the generation in Comfy, but I tested sampling it in OneTrainer and it works there.

Without lora (OT)

pixartSigmaXL2512MS_loraoff

With (OT)

pixartSigmaXL2512MS_loraon

Without lora (comfy)

comfyLoraOff

With (comfy) (no change in generation)

comfyLoraOnNoEffect

comfy cli

comfyNoEffectCli

boricuapab avatar Jul 22 '24 07:07 boricuapab

Did you upgrade your extra nodes recently?

frutiemax92 avatar Jul 22 '24 14:07 frutiemax92

I have, I still cannot get a PixArt LoRa to have any meaningful effect like it did in OneTrainer.

ejektaflex avatar Jul 22 '24 14:07 ejektaflex

Same issue here. Looks like none of the keys in the lora are correctly being matched.

Dug a little - among other things, the helper methods get_depth and get_lora_depth are both returning zero.
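If the depth helpers come back zero, it usually means no key in the file matched the expected pattern at all. A quick standalone diagnostic (hypothetical, not part of the repo) that counts which transformer block indices a LoRA's keys actually reference, in either underscore or dotted form:

```python
import re

def lora_block_depth(keys) -> int:
    # Collect distinct transformer block indices referenced by the LoRA keys.
    idx = {int(m.group(1)) for k in keys
           if (m := re.search(r"transformer_blocks[._](\d+)", k))}
    return max(idx) + 1 if idx else 0

sample = [
    "lora_transformer_transformer_blocks_3_attn2_to_q.weight",
    "lora_transformer_transformer_blocks_9_attn1_to_v.weight",
]
print(lora_block_depth(sample))  # -> 10
```

A result of 0 on a real file would confirm the key pattern the loader searches for doesn't match what the trainer wrote.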

chrisgoringe avatar Jul 27 '24 00:07 chrisgoringe

The amount of convoluted jank and technical debt in this repo is staggering, and I should really just rewrite the entire LoRA loading logic instead of having half the key conversion be hardcoded.

Anyway, could you try it now? Pushed a fix.

image

city96 avatar Jul 27 '24 15:07 city96

Will try it later today. Thanks!

If you want a collaborator to help with refactoring the code, lmk

chrisgoringe avatar Jul 27 '24 23:07 chrisgoringe

I've switched over to training diffusers LoRAs for the Sigma line of models using the Sigma LoRA repo; here's an example of one for Sigma 900M which is working inside of Comfy:

https://civitai.com/models/610726/pocket-creatures-sigma-900m

boricuapab avatar Jul 30 '24 07:07 boricuapab

@boricuapab Nice! Would you mind sharing your training config?

chrish-slingshot avatar Jul 30 '24 07:07 chrish-slingshot

I don't have a one trainer training config for it, I'm training them using this repo

https://github.com/PixArt-alpha/PixArt-sigma

boricuapab avatar Jul 30 '24 15:07 boricuapab

I've also trained this dreambooth lora with this script: https://github.com/PixArt-alpha/PixArt-sigma/blob/master/train_scripts/train_dreambooth_lora.py

frutiemax92 avatar Jul 30 '24 22:07 frutiemax92