llama.cpp
Looking for help using llama.cpp with the Phi3 model and LoRA
Recently, I used qLoRA to fine-tune the Phi3-mini-4k-instruct model, and I have saved the LoRA parameters. I plan to merge the LoRA layers into the original model in Ollama. I started as usual with llama.cpp; in particular, I used the Python script "convert-lora-ggml.py" to convert the LoRA parameters so they can be used in Ollama, but I ran into the following error:
```
INFO:lora-to-gguf:model.layers.0.mlp.down_proj => blk.0.ffn_down.weight.loraA (8192, 32) float32 1.00MB
INFO:lora-to-gguf:model.layers.0.mlp.down_proj => blk.0.ffn_down.weight.loraB (3072, 32) float32 0.38MB
INFO:lora-to-gguf:model.layers.0.mlp.gate_up_proj => blk.0.ffn_up.weight.loraA (3072, 32) float32 0.38MB
INFO:lora-to-gguf:model.layers.0.mlp.gate_up_proj => blk.0.ffn_up.weight.loraB (16384, 32) float32 2.00MB
ERROR:lora-to-gguf:Error: could not map tensor name base_model.model.model.layers.0.self_attn.qkv_proj.lora_A.weight
ERROR:lora-to-gguf: Note: the arch parameter must be specified if the model is not llama
```
(By the way, I applied the LoRA to the "qkv_proj", "gate_up_proj", and "down_proj" layers of the Phi3 model.)
I would be grateful if someone could give me some suggestions on solving this issue. Thanks in advance!
I find that the layers of Phi2 and Phi3 are named differently: for Phi2, llama.cpp converts the LoRA weights to GGML fine (there the fused attention layer is named Wqkv), while in Phi3 the same layer is named qkv_proj. Could this naming difference be why llama.cpp fails to convert the weights to GGML?
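If you want to confirm which names your adapter actually uses, a quick check (assuming a standard PEFT adapter saved as adapter_model.safetensors) is to list the tensor keys:

```python
# List the tensor names stored in the LoRA adapter file.
# "adapter_model.safetensors" is the usual PEFT filename; adjust the path if needed.
from safetensors import safe_open

with safe_open("adapter_model.safetensors", framework="pt") as f:
    for key in f.keys():
        print(key)   # e.g. base_model.model.model.layers.0.self_attn.qkv_proj.lora_A.weight
```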
Any update on this? I am running into the same issue. The LoRA runs correctly with transformers, but when I convert it with llama.cpp it gives me nonsense output.
I hope this will be fixed as soon as possible.
The reason is that llama.cpp treats Phi3 as the llama architecture, i.e., it expects the merged qkv_proj to be split into separate q_proj, k_proj and v_proj layers. One workaround, posted by @Raibows at https://github.com/vllm-project/vllm/issues/4715, is to convert the tensor weights of your adapter/LoRA checkpoint to match that layout; he provides a script at https://gist.github.com/Raibows/079713a060f0c49c8f3b47c227aff722. A minimal sketch of the same idea is below.
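The sketch below splits the fused qkv_proj and gate_up_proj LoRA tensors into per-projection tensors. It assumes a standard PEFT adapter saved as adapter_model.safetensors and Phi3-mini-4k-instruct dimensions (hidden size 3072, equal q/k/v sizes, FFN size 8192); the sizes and file paths are assumptions you must adjust for other Phi3 variants, and @Raibows' gist is the authoritative version.

```python
# Sketch: split fused Phi3 LoRA tensors so their names match the per-projection
# layout (q_proj/k_proj/v_proj, gate_proj/up_proj) that the llama.cpp LoRA
# converter expects. Assumes Phi3-mini-4k-instruct shapes.
import torch
from safetensors.torch import load_file, save_file

HIDDEN = 3072         # q output size (Phi3-mini hidden size) -- assumption
KV_SIZE = 3072        # k/v output size; equals HIDDEN on mini -- assumption
INTERMEDIATE = 8192   # FFN intermediate size on mini -- assumption

old = load_file("adapter_model.safetensors")   # path is a placeholder
new = {}

for name, tensor in old.items():
    if "qkv_proj" in name:
        if "lora_A" in name:
            # lora_A maps the shared input, so q, k and v can reuse a copy of it.
            for proj in ("q_proj", "k_proj", "v_proj"):
                new[name.replace("qkv_proj", proj)] = tensor.clone()
        else:
            # lora_B has shape (q + k + v, r); split along the output dimension.
            q, k, v = torch.split(tensor, [HIDDEN, KV_SIZE, KV_SIZE], dim=0)
            new[name.replace("qkv_proj", "q_proj")] = q.contiguous()
            new[name.replace("qkv_proj", "k_proj")] = k.contiguous()
            new[name.replace("qkv_proj", "v_proj")] = v.contiguous()
    elif "gate_up_proj" in name:
        if "lora_A" in name:
            for proj in ("gate_proj", "up_proj"):
                new[name.replace("gate_up_proj", proj)] = tensor.clone()
        else:
            gate, up = torch.split(tensor, INTERMEDIATE, dim=0)
            new[name.replace("gate_up_proj", "gate_proj")] = gate.contiguous()
            new[name.replace("gate_up_proj", "up_proj")] = up.contiguous()
    else:
        new[name] = tensor   # e.g. down_proj, o_proj tensors pass through unchanged

save_file(new, "adapter_model_split.safetensors")
```

You will likely also need to update target_modules in adapter_config.json to list the split layer names (q_proj, k_proj, v_proj, gate_proj, up_proj) so the converter picks them up.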
I have tested it, and it successfully converts the LoRA weights into GGML, but there is another problem: Ollama cannot apply these GGML LoRA weights on top of Phi3-instruct. I think we need to somehow merge the LoRA weights back into the base model, along the lines of the sketch below...
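One way to sidestep the Ollama side entirely, assuming you still have the original adapter and the base model, is to merge the adapter into the base weights with PEFT and then convert the merged model to GGUF. A rough sketch (paths are placeholders, and the converter script name may differ between llama.cpp versions):

```python
# Sketch: fold the LoRA adapter into the base Phi3 weights, then convert the
# merged model to GGUF so Ollama can load it directly (no separate adapter).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    torch_dtype=torch.float16,
    trust_remote_code=True,   # may be required for Phi3 depending on transformers version
)
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # placeholder path

merged = model.merge_and_unload()   # folds the LoRA deltas into the base weights
merged.save_pretrained("phi3-merged")

tok = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
tok.save_pretrained("phi3-merged")

# Then, from the llama.cpp repo (script name may vary by version):
#   python convert-hf-to-gguf.py phi3-merged --outfile phi3-merged.gguf
# and point an Ollama Modelfile's FROM at phi3-merged.gguf.
```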
anyone?
I have the same issue...
This issue was closed because it has been inactive for 14 days since being marked as stale.
@SHIMURA0 Hey! Did you manage to resolve this issue? I’m facing the same problem:
```
INFO:lora-to-gguf:model.layers.0.mlp.down_proj => blk.0.ffn_down.weight.loraA (17920, 16) float32 1.09MB
INFO:lora-to-gguf:model.layers.0.mlp.down_proj => blk.0.ffn_down.weight.loraB (5120, 16) float32 0.31MB
INFO:lora-to-gguf:model.layers.0.mlp.gate_up_proj => blk.0.ffn_up.weight.loraA (5120, 16) float32 0.31MB
INFO:lora-to-gguf:model.layers.0.mlp.gate_up_proj => blk.0.ffn_up.weight.loraB (35840, 16) float32 2.19MB
INFO:lora-to-gguf:model.layers.0.self_attn.o_proj => blk.0.attn_output.weight.loraA (5120, 16) float32 0.31MB
INFO:lora-to-gguf:model.layers.0.self_attn.o_proj => blk.0.attn_output.weight.loraB (5120, 16) float32 0.31MB
ERROR:lora-to-gguf:Error: could not map tensor name base_model.model.model.layers.0.self_attn.qkv_proj.lora_A.weight
ERROR:lora-to-gguf: Note: the arch parameter must be specified if the model is not llama
```