
How does MergedLinear work?

tspeterkim opened this issue · 7 comments

I understand why we need MergedLinear, but is there a simple example of how the forward pass works for a MergedLinear? Specifically, this line: https://github.com/microsoft/LoRA/blob/main/loralib/layers.py#L248. I'm struggling to understand what the 1D conv is doing here.

I would also appreciate a mathematical explanation. For the Linear case, I understand the simple matrix multiplication ΔW · x = B · A · x. But for MergedLinear, what would be the equation for ΔW?

tspeterkim · May 11, 2023

Hi Peter,

MergedLinear should do exactly what Linear does mathematically when multiple linear layers are "merged" into one, as in the GPT codebase. It exists simply to make the GPT integration easier.

Hope this helps!

edwardjhu · May 15, 2023
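To make that point concrete, here is a minimal editorial sketch (not from the thread; the dimensions are made up): for a merged QKV projection, the LoRA update is one rank-r pair (A_i, B_i) per sub-projection, stacked along the output dimension, so applying the "merged" ΔW to x is the same as applying each per-projection update separately and concatenating the results.

import torch

# Editorial sketch; dimensions are illustrative.
d, r = 16, 4
x = torch.randn(d)

A = [torch.randn(r, d) for _ in range(3)]   # A_q, A_k, A_v
B = [torch.randn(d, r) for _ in range(3)]   # B_q, B_k, B_v

# "merged" delta-W: stack the three per-projection updates -> shape (3d, d)
delta_qkv = torch.cat([b @ a for a, b in zip(A, B)], dim=0)

# applying the merged update equals applying each update separately and concatenating
merged_out = delta_qkv @ x
separate_out = torch.cat([b @ (a @ x) for a, b in zip(A, B)])
assert torch.allclose(merged_out, separate_out, atol=1e-5)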

Hello @peterkim95, I've added some annotations to the LoRA code in the lit-llama repo, which you may find helpful.


Nevertheless, I don't quite understand why there is a combination of Linear (for matrix A) and Conv1d (for matrix B) layers. Why not both Linear or both Conv1d? @edwardjhu, could you briefly explain this, or maybe link to an article to read? Because I have no idea 🤷‍♂️. I have a couple of suspicions, which I described in my repo, but I have a feeling that I'm not even close.

And by the way, thanks for your work 👍.

Andrei-Aksionov · Jun 1, 2023

good issue

shipengai · Jun 19, 2023

For instance, if you wish to add a rank-8 LoRA to a model's attention layer, covering its three matrices (Q, K, V), you can use the following code:

import torch.nn as nn

lora_A = nn.Linear(in_features, 8 * 3, bias=False)  # one rank-8 A per projection, stacked
lora_B = nn.Conv1d(8 * 3, out_features, kernel_size=1, groups=3, bias=False)  # groups=3 keeps Q, K, V independent

If you used nn.Linear for both A and B, you would need to handle Q, K, and V separately. By using nn.Conv1d with the groups parameter, all three can be processed simultaneously without interfering with one another.

clalanliu · Jul 9, 2023
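That grouping claim can be checked directly. The following is an editorial sketch (sizes are made up, not taken from the original code): a kernel-size-1 Conv1d with groups=3 applies an independent linear map to each third of its input channels, so the B matrices for Q, K, and V never mix.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Editorial sketch with illustrative sizes.
r, out_features, seq_len = 8, 12, 5
lora_B = nn.Conv1d(r * 3, out_features, kernel_size=1, groups=3, bias=False)

h = torch.randn(1, r * 3, seq_len)        # output of lora_A, laid out as (batch, channels, seq)
grouped = lora_B(h)                       # (1, out_features, seq_len)

# same result from three separate per-projection convolutions
h_chunks = h.chunk(3, dim=1)              # each (1, r, seq_len)
w_chunks = lora_B.weight.chunk(3, dim=0)  # each (out_features // 3, r, 1)
separate = torch.cat([F.conv1d(c, w) for c, w in zip(h_chunks, w_chunks)], dim=1)
assert torch.allclose(grouped, separate, atol=1e-5)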

Hello @clalanliu. So, as I understand it, with nn.Conv1d and the groups parameter each part of the combined QKV matrix is processed independently, while with nn.Linear the lora_B matrix would "see" and process the whole combined matrix. Am I wrong?

And if so, why isn't this approach used for lora_A?

Andrei-Aksionov · Jul 10, 2023

@Andrei-Aksionov Yes. You can check my note.

> And if so, why isn't this approach used for lora_A?

There is no need to do so, because the input to the Q, K, and V matrices is the same (that is, x).

clalanliu · Jul 10, 2023
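Spelling that out with a minimal editorial sketch (sizes are illustrative): because Q, K, and V all read the same input x, one stacked lora_A behaves exactly like three separate rank-r A matrices, which is why only the lora_B side needs the grouping.

import torch
import torch.nn as nn

# Editorial sketch with illustrative sizes.
d, r = 16, 8
x = torch.randn(2, d)                            # (batch, in_features)

lora_A = nn.Linear(d, r * 3, bias=False)         # one merged A for Q, K, V
merged = lora_A(x)                               # (2, 3 * r)

A_q, A_k, A_v = lora_A.weight.chunk(3, dim=0)    # each (r, d)
separate = torch.cat([x @ A_q.T, x @ A_k.T, x @ A_v.T], dim=1)
assert torch.allclose(merged, separate, atol=1e-5)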

> There is no need to do so, because the input to the Q, K, and V matrices is the same (that is, x).

Oh boy, how did I miss that 🤣. Thanks

Andrei-Aksionov · Jul 10, 2023