
LoRA module splicing

Open · FortuneBush opened this issue 1 year ago · 6 comments

Your work is outstanding. I would like to ask which key code modules handle the assembly (splicing) of the LoRA modules. Thank you for your reply. @yezhengmao1

FortuneBush, Nov 29 '24

mlora/model/modules/lora.py
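
For readers landing here, the following is a minimal, illustrative sketch of the kind of low-rank adapter such a file implements: a lora_a down-projection and a lora_b up-projection whose product is added to the output of a frozen base weight. The class and argument names are illustrative only, not the exact mLoRA API.

```python
import math
import torch
import torch.nn as nn


class LoRASketch(nn.Module):
    """Illustrative low-rank adapter: delta(x) = x @ A^T @ B^T * scaling."""

    def __init__(self, in_dim: int, out_dim: int, r: int = 8, alpha: int = 16):
        super().__init__()
        # lora_a projects down to rank r, lora_b projects back up to out_dim.
        self.lora_a = nn.Parameter(torch.empty(r, in_dim))
        self.lora_b = nn.Parameter(torch.zeros(out_dim, r))
        nn.init.kaiming_uniform_(self.lora_a, a=math.sqrt(5))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Only the low-rank delta; the frozen base weight is applied elsewhere.
        return (x @ self.lora_a.T) @ self.lora_b.T * self.scaling
```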

yezhengmao1, Nov 29 '24

It seems that the multi-LoRA modules are placed in a list without concatenating lora_a1 and lora_a2 into a new lora_A, and that each group's lora_a and lora_b are extracted separately during each training step. May I ask if this understanding is correct? Could you also explain the technical details of how the LoRA modules are spliced and unloaded? Thank you for your reply!!! @yezhengmao1

FortuneBush, Dec 04 '24

[screenshot]

Also, I find that while task1 and task2 are running they use the same pid. It seems that the two LoRA training tasks are not trained in parallel but serially: within each layer of an iteration over the network, task A appears to be trained first, then task B, and training only moves on to the next layer after both A and B have finished.

May I ask if this understanding is correct?

Thank you for your reply!!! @yezhengmao1

FortuneBush, Dec 04 '24

If you have one GPU, two tasks (task1 and task2), and set concurrency_num: 2:

mLoRA will concatenate task1's and task2's inputs into one large batch and train them simultaneously (in a single process, so they share the same pid).

Also, if you set concurrency_num: 1, mLoRA will train task1 and task2 serially.
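
To make that concrete, here is a small illustrative sketch (not the actual mLoRA code; all names are made up for the example) of how inputs from two tasks can be concatenated into one batch, pushed through the shared frozen weights in a single forward pass, and then corrected by each task's own (lora_a, lora_b) pair applied only to that task's rows:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
hidden, r = 64, 8

# Frozen shared base layer, plus one (lora_a, lora_b) pair per task, kept in a list.
base = nn.Linear(hidden, hidden, bias=False)
base.weight.requires_grad_(False)
adapters = [
    (nn.Parameter(torch.randn(r, hidden) * 0.01), nn.Parameter(torch.zeros(hidden, r)))
    for _ in range(2)  # task1, task2
]

# Each task contributes its own mini-batch; they are concatenated into one large batch.
x_task1 = torch.randn(4, hidden)
x_task2 = torch.randn(4, hidden)
x = torch.cat([x_task1, x_task2], dim=0)   # combined batch of 8
slices = [slice(0, 4), slice(4, 8)]        # rows 0-3 belong to task1, rows 4-7 to task2

# One forward pass of the shared frozen weights over the whole batch ...
base_out = base(x)
# ... then each task's low-rank delta is computed only on that task's rows.
delta = torch.cat(
    [(x[s] @ a.T) @ b.T for (a, b), s in zip(adapters, slices)], dim=0
)
out = base_out + delta
```

In this picture both adapters advance through every layer within the same forward/backward pass of one process, which is why the two tasks share a pid; with concurrency_num: 1 each task would instead be fed through as its own, sequential batch.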

yezhengmao1, Dec 04 '24

> If you have one GPU, two tasks (task1 and task2), and set concurrency_num: 2:
>
> mLoRA will concatenate task1's and task2's inputs into one large batch and train them simultaneously (in a single process, so they share the same pid).
>
> Also, if you set concurrency_num: 1, mLoRA will train task1 and task2 serially.

Can you point out the code that concatenates the two tasks' inputs into one large batch? The code seems to show that the two LoRA modules are stored in a list, not combined into a larger batch.

FortuneBush, Dec 04 '24

tokens = torch.tensor(
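
The line above is truncated in this thread. Purely as an illustration of the idea (variable names are hypothetical, not the actual mLoRA source), building the combined batch could look roughly like this: the token lists of the scheduled tasks are padded to one length, turned into a single tensor, and the per-task row ranges are remembered so each adapter only sees its own rows.

```python
import torch

PAD_ID = 0  # hypothetical pad token id

# Hypothetical per-task token id lists collected from the scheduler.
task1_tokens = [[5, 17, 42], [7, 9, 23, 31]]
task2_tokens = [[11, 3, 8, 19, 2]]

batch = task1_tokens + task2_tokens
max_len = max(len(seq) for seq in batch)

# Pad every sequence to the same length, then build one tensor for the big batch.
tokens = torch.tensor(
    [seq + [PAD_ID] * (max_len - len(seq)) for seq in batch],
    dtype=torch.long,
)

# Remember which rows belong to which task, so each LoRA adapter
# is applied to (and updated from) only its own rows.
row_ranges = {"task1": (0, len(task1_tokens)), "task2": (len(task1_tokens), len(batch))}
```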

If you have any questions, you can contact me on WeChat: 18280486636

yezhengmao1, Dec 04 '24