
Chunk adapter for fused qkv_proj

Open · yxli2123 opened this issue 1 year ago · 1 comment

Since microsoft/Phi-3-mini-4k-instruct uses fused qkv_proj and up_gate_proj layers, LoRA adapters should be applied to each chunk rather than to the entire fused matrix.
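For context, here is a minimal sketch of what "fused" means here (the sizes are illustrative, not taken from the actual Phi-3 config): a single `nn.Linear` produces the concatenated Q, K, and V outputs, so one weight matrix serves several logical projections.

```python
import torch.nn as nn

hidden = 3072        # illustrative hidden size
intermediate = 8192  # illustrative MLP width

# One Linear produces [Q | K | V] concatenated along the output dimension.
qkv_proj = nn.Linear(hidden, 3 * hidden, bias=False)

# Likewise, one Linear produces the two fused MLP projections.
up_gate_proj = nn.Linear(hidden, 2 * intermediate, bias=False)
```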

The change lies in (1) adapter initialization and (2) forward computing.

For (1), we create n_chunk adapters for one fused matrix, named default-chunk-0, default-chunk-1, ..., for both lora_A and lora_B.
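A minimal sketch of what (1) could look like (the names and shapes are my own illustration, not the PR's actual code), assuming the fused output splits into equal-sized chunks:

```python
import torch.nn as nn

def init_chunk_adapters(base_layer, n_chunks, r, adapter_name="default"):
    """Create one (lora_A, lora_B) pair per chunk of a fused linear layer."""
    in_features = base_layer.in_features
    chunk_out = base_layer.out_features // n_chunks  # assumes equal chunk sizes
    lora_A, lora_B = nn.ModuleDict(), nn.ModuleDict()
    for i in range(n_chunks):
        name = f"{adapter_name}-chunk-{i}"  # default-chunk-0, default-chunk-1, ...
        lora_A[name] = nn.Linear(in_features, r, bias=False)
        lora_B[name] = nn.Linear(r, chunk_out, bias=False)
        nn.init.zeros_(lora_B[name].weight)  # usual LoRA init: B starts at zero
    return lora_A, lora_B
```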

For (2), we iterate over the adapters by adapter_name. Once we find the chunk an adapter belongs to, we update only the associated slice of the dense result.
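And a sketch of (2), again only illustrative: the base layer computes the full fused output once, and each chunk adapter's delta is added only to its own slice (`self.scaling` is assumed to be a scalar here; PEFT tracks scaling per adapter).

```python
import torch

def forward(self, x):
    result = self.base_layer(x)  # full fused output, e.g. [Q | K | V]
    chunk_out = result.shape[-1] // self.n_chunks
    chunks = list(result.split(chunk_out, dim=-1))
    for name in self.lora_A:  # "default-chunk-0", "default-chunk-1", ...
        i = int(name.rsplit("-", 1)[-1])  # recover the chunk index from the name
        delta = self.lora_B[name](self.lora_A[name](x)) * self.scaling
        chunks[i] = chunks[i] + delta  # update only the associated slice
    return torch.cat(chunks, dim=-1)
```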

We have run make style and make quality.

yxli2123 · May 03 '24

Thanks for providing this PR. Indeed, as of now, we don't have any special provision in LoRA for dealing with fused QKV layers (or any fused layers): for models that use fused QKV, we just apply the LoRA adapter to the whole fused layer. I haven't thought this through completely, but I wonder whether that isn't already good enough. Do we really need to chunk the LoRA adapter?
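One way to make the trade-off concrete (my own framing, not from the thread): with hidden size $d$ and rank $r$, the two schemes differ in how the low-rank update is shared across the chunks.

```latex
% Single adapter on the fused projection:
\Delta W = BA, \quad B \in \mathbb{R}^{3d \times r}, \; A \in \mathbb{R}^{r \times d},
\quad \operatorname{rank}(\Delta W) \le r
% All three chunks share the down-projection A, i.e. one r-dimensional input subspace.

% One adapter per chunk (the PR's scheme):
\Delta W_i = B_i A_i, \quad B_i \in \mathbb{R}^{d \times r}, \; A_i \in \mathbb{R}^{r \times d},
\quad i \in \{q, k, v\}
% Each chunk gets its own subspace; the stacked update can reach rank 3r.
```

So chunking buys independent subspaces per projection at the cost of more adapter parameters ($6dr$ vs. $4dr$ in this setup); whether that matters in practice is exactly the open question here.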

Where there definitely is a problem is when we have fused QKV but want to apply LoRA only to Q and V, for instance. That is currently not possible, but AFAICT your PR wouldn't change that situation, is that right?
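For what it's worth, a per-chunk scheme could in principle be extended to cover that case by creating adapters only for selected chunks; a hypothetical sketch (the `adapted_chunks` selection is my own invention, not an existing PEFT option or part of this PR):

```python
# Hypothetical: adapt only Q (chunk 0) and V (chunk 2) of a fused qkv_proj,
# leaving K (chunk 1) untouched.
adapted_chunks = (0, 2)
for i in adapted_chunks:
    name = f"default-chunk-{i}"
    lora_A[name] = nn.Linear(in_features, r, bias=False)
    lora_B[name] = nn.Linear(r, chunk_out, bias=False)
    nn.init.zeros_(lora_B[name].weight)
```

A forward pass like the one sketched above would then skip K automatically, since it only iterates over the adapter names that exist.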

BenjaminBossan · May 03 '24

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

github-actions[bot] · Jun 02 '24