
Does sdcpp support wan-phantom?

Open KintCark opened this issue 2 months ago • 4 comments

I didn't know there was a Wan 1.3B Phantom model. Does sdcpp support it, and EchoShot 1.3B?

KintCark · Nov 10 '25 18:11

Why doesn't sdcpp support all Wan LoRAs? I've tried some and they throw this error:

```
[INFO ] model.cpp:2351 - loading tensors completed, taking 5.76s (process: 0.04s, read: 2.09s, memcpy: 0.00s, convert: 0.55s, copy_to_backend: 0.00s)
[INFO ] stable-diffusion.cpp:681 - total params memory size = 7026.37MB (VRAM 0.00MB, RAM 7026.37MB): text_encoders 5592.45MB(RAM), diffusion_model 1191.82MB(RAM), vae 242.10MB(RAM), controlnet 0.00MB(VRAM), pmid 0.00MB(RAM)
[INFO ] stable-diffusion.cpp:776 - running in FLOW mode
[INFO ] stable-diffusion.cpp:2740 - generate_video 320x256x17
[INFO ] stable-diffusion.cpp:843 - Running with Simple schedule
[INFO ] stable-diffusion.cpp:954 - attempting to apply 1 LoRAs
[INFO ] model.cpp:1100 - load /root/stable-diffusion.cpp/build/models/wan-aesthetics-v1.safetensors using safetensors format
[INFO ] lora.hpp:120 - loading LoRA from '/root/stable-diffusion.cpp/build/models/wan-aesthetics-v1.safetensors'
  |==================================================| 600/600 - 599999.94it/s
[INFO ] model.cpp:2351 - loading tensors completed, taking 0.00s (process: 0.00s, read: 0.00s, memcpy: 0.00s, convert: 0.00s, copy_to_backend: 0.00s)
  |>                                                 | 6/600 - 6
  |==================================================| 600/600 - 2985.07it/s
[INFO ] model.cpp:2351 - loading tensors completed, taking 0.20s (process: 0.00s, read: 0.11s, memcpy: 0.00s, convert: 0.00s, copy_to_backend: 0.00s)
[WARN ] lora.hpp:860 - unused lora tensor |blocks.0.cross_attn.k.lora_A.default.weight| (f32): shape(1536, 32, 1, 1)
[WARN ] lora.hpp:860 - unused lora tensor |blocks.0.cross_attn.k.lora_B.default.weight| (f32): shape(32, 1536, 1, 1)
[WARN ] lora.hpp:860 - unused lora tensor |blocks.0.cross_attn.o.lora_A.default.weight| (f32): shape(1536, 32, 1, 1)
[WARN ] lora.hpp:860 - unused lora tensor |blocks.0.cross_attn.o.lora_B.default.weight| (f32): shape(32, 1536, 1, 1)
[WARN ] lora.hpp:860 - unused lora tensor |blocks.0.cross_attn.q.lora_A.default.weight| (f32): shape(1536, 32, 1, 1)
[WARN ] lora.hpp:860 - unused lora tensor |blocks.0.cross_attn.q.lora_B.default.weight| (f32): shape(32, 1536, 1, 1)
[WARN ] lora.hpp:860 - unused lora tensor |blocks.0.cross_attn.v.lora_A.default.weight| (f32): shape(1536, 32, 1, 1)
[WARN ] lora.hpp:860 - unused lora tensor |blocks.0.cross_attn.v.lora_B.default.weight| (f32): shape(32, 1536, 1, 1)
[WARN ] lora.hpp:860 - unused lora tensor |blocks.0.ffn.0.lora_A.default.weight| (f32): shape(1536, 32, 1, 1)
[WARN ] lora.hpp:860 - unused lora tensor |blocks.0.ffn.0.lora_B.default.weight| (f32): shape(32, 8960, 1, 1)
[WARN ] lora.hpp:860 - unused lora tensor |blocks.0.ffn.2.lora_A.default.weight| (f32): shape(8960, 32, 1, 1)
[WARN ] lora.hpp:860 - unused lora tensor |blocks.0.ffn.2.lora_B.default.weight| (f32): shape(32, 1536, 1, 1)
[WARN ] lora.hpp:860 - unused lora tensor |blocks.0.self_attn.k.lora_A.default.weight| (f32): shape(1536, 32, 1, 1)
[WARN ] lora.hpp:860 - unused lora tensor |blocks.0.self_attn.k.lora_B.default.weight| (f32): shape(32, 1536, 1, 1)
[WARN ] lora.hpp:860 - unused lora tensor |blocks.0.self_attn.o.lora_A.default.weight| (f32): shape(1536, 32, 1, 1)
[WARN ] lora.hpp:860 - unused lora tensor |blocks.0.self_attn.o.lora_B.default.weight| (f32): shape(32, 1536, 1, 1)
[WARN ] lora.hpp:860 - unused lora tensor |blocks.0.self_attn.q.lora_A.default.weight| (f32): shape(1536, 32, 1, 1)
[WARN ] lora.hpp:860 - unused lora tensor |blocks.0.self_attn.q.lora_B.default.weight| (f32): shape(32, 1536, 1, 1)
[WARN ] lora.hpp:860 - unused lora tensor |blocks.0.self_attn.v.lora_A.default.weight| (f32): shape(1536, 32, 1, 1)
[WARN ] lora.hpp:860 - unused lora tensor |blocks.0.self_attn.v.lora_B.default.weight| (f32): shape(32, 1536, 1, 1)
[... the same 20 "unused lora tensor" warnings repeat for blocks 1 through 29; log truncated in the original paste ...]
```

KintCark · Nov 10 '25 19:11
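The warnings above point at PEFT-style key names: `lora_A`/`lora_B` factors plus a `.default` adapter scope (`blocks.N.<module>.lora_A.default.weight`), which sdcpp's LoRA name mapping evidently does not match against the Wan diffusion-model tensors. One possible offline workaround is to rename the keys before loading. A minimal sketch, assuming (unverified) that the loader accepts the common `lora_down`/`lora_up` convention; the output file name is hypothetical:

```python
# Hypothetical workaround: rewrite PEFT-style LoRA keys
# (blocks.N.<module>.lora_A.default.weight) into the
# lora_down/lora_up naming that many loaders map more readily.
# Whether sdcpp accepts the result is an assumption -- verify
# against the mapping logic in lora.hpp before relying on this.
from safetensors.torch import load_file, save_file

src = load_file("wan-aesthetics-v1.safetensors")
dst = {}
for name, tensor in src.items():
    new_name = name.replace(".lora_A.default.weight", ".lora_down.weight")
    new_name = new_name.replace(".lora_B.default.weight", ".lora_up.weight")
    dst[new_name] = tensor

save_file(dst, "wan-aesthetics-v1-renamed.safetensors")
```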

I need this aesthetics LoRA by DiffSynth-Studio. It fixes bad-quality videos, so with this LoRA I'd be able to create videos at low resolution.

KintCark · Nov 10 '25 20:11
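For reference, sd.cpp applies LoRAs through the `<lora:name:weight>` prompt syntax together with `--lora-model-dir`. A hedged sketch of an invocation at the log's 320x256 resolution; the model file names are placeholders and any video-specific flags are omitted, so check `sd --help` for the exact options in your build:

```
./sd --diffusion-model models/wan2.1-t2v-1.3b.safetensors \
  --vae models/wan_vae.safetensors \
  --t5xxl models/umt5-xxl.safetensors \
  --lora-model-dir models \
  -p "a calm city street at dusk <lora:wan-aesthetics-v1:1>" \
  -W 320 -H 256
```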

It would indeed be cool if we could use diffusers wan models (blocks.0.ffn.2...).

Green-Sky · Nov 10 '25 20:11
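A quick way to tell which naming convention a given LoRA or model file uses is to list its tensor keys. A small sketch with the `safetensors` Python package; the file name is taken from the log above:

```python
# List the first few tensor names and shapes in a safetensors file to
# see whether it uses PEFT-style keys (lora_A/lora_B with an adapter
# scope) or the lora_down/lora_up convention.
from safetensors import safe_open

with safe_open("wan-aesthetics-v1.safetensors", framework="pt") as f:
    for name in sorted(f.keys())[:10]:
        print(name, f.get_slice(name).get_shape())
```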

> It would indeed be cool if we could use diffusers wan models (blocks.0.ffn.2...).

Yes, Wan Phantom works.

KintCark · Nov 11 '25 21:11