Lora module splicing
Your work is outstanding. I would like to ask which key code modules handle the assembly of the LoRA modules. Thank you for your reply. @yezhengmao1
mlora/model/modules/lora.py
It seems that the multiple LoRA modules are placed in a list without concatenating lora_a1 and lora_a2 into a new lora_A, and that each group's lora_a and lora_b are extracted separately during each training step. May I ask if this understanding is correct? Could you also walk through the technical code details of LoRA splicing and unloading again? Thank you for your reply!!! @yezhengmao1
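To make sure I am reading it right, my current mental model is roughly the following sketch (class and attribute names such as LoraAdapter and adapters are illustrative placeholders, not the actual code in mlora/model/modules/lora.py):

```python
import torch


class LoraAdapter:
    """One task's adapter: it keeps its own lora_a / lora_b pair."""

    def __init__(self, in_dim: int, out_dim: int, r: int = 8, scaling: float = 2.0):
        # standard LoRA init: A is random, B starts at zero
        self.lora_a = torch.randn(r, in_dim) * 0.01
        self.lora_b = torch.zeros(out_dim, r)
        self.scaling = scaling

    def delta(self, x: torch.Tensor) -> torch.Tensor:
        # the low-rank update x @ A^T @ B^T, scaled
        return (x @ self.lora_a.T @ self.lora_b.T) * self.scaling


class LinearWithLoras:
    """The frozen base linear layer just collects adapters side by side."""

    def __init__(self, weight: torch.Tensor):
        self.weight = weight                        # frozen base weight
        self.adapters: dict[str, LoraAdapter] = {}  # one entry per task

    def add_adapter(self, name: str, adapter: LoraAdapter) -> None:
        # each adapter is stored separately; the lora_a matrices are never
        # concatenated into one big lora_A
        self.adapters[name] = adapter
```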
Also, I find that task1 and task2 use the same pid while running. It seems that the two LoRA training tasks are not trained in parallel but serially: within each layer of the network in an iteration, task A appears to be trained first, followed by task B, and training only moves on to the next layer after both tasks A and B have been processed.
May I ask if this understanding is correct?
Thank you for your reply!!! @yezhengmao1
If you have one GPU, two tasks (task1 and task2), and set concurrency_num: 2:
mLoRA will concatenate task1's and task2's inputs into one large batch and train them simultaneously (in a single process, which is why they share the same pid).
Also, if you set concurrency_num: 1, mLoRA will train task1 and task2 serially.
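A rough sketch of this behaviour, assuming each task supplies its own mini-batch of inputs (the names train_step, task_batches, and adapter_names are made up for illustration and are not the real mLoRA API):

```python
import torch


def train_step(model, task_batches: dict[str, torch.Tensor], concurrency_num: int):
    """Illustrative only: how concurrency_num changes the training pattern."""
    tasks = list(task_batches.items())
    if concurrency_num >= 2:
        # concatenate every task's input along the batch dimension and run
        # one forward/backward pass in the same process (hence one pid)
        names = [name for name, _ in tasks]
        big_batch = torch.cat([batch for _, batch in tasks], dim=0)
        return model(big_batch, adapter_names=names)
    # concurrency_num == 1: the tasks are trained one after another
    outputs = []
    for name, batch in tasks:
        outputs.append(model(batch, adapter_names=[name]))
    return outputs
```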
Can you point out the code that concatenates the two tasks' inputs into one large batch? The code I see only stores the two LoRA modules in a list, not a larger batch.
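For reference, here is roughly how I picture a list of LoRA modules consuming one concatenated batch: the frozen base weight is applied to the whole batch once, and each adapter only touches its own slice of rows. This is purely illustrative and not copied from lora.py:

```python
import torch


def batched_lora_forward(x, base_weight, adapters, row_ranges):
    # x:          (total_rows, in_dim) -- task1 rows followed by task2 rows
    # adapters:   list of (lora_a, lora_b, scaling) tuples, one per task
    # row_ranges: list of (start, end) pairs marking each task's rows
    result = x @ base_weight.T                      # shared frozen computation
    for (lora_a, lora_b, scaling), (start, end) in zip(adapters, row_ranges):
        chunk = x[start:end]                        # only this task's rows
        result[start:end] += (chunk @ lora_a.T @ lora_b.T) * scaling
    return result


# toy usage: 3 rows per task, hidden size 16 -> 32, rank 4
x = torch.randn(6, 16)
w = torch.randn(32, 16)
a1, b1 = torch.randn(4, 16), torch.zeros(32, 4)
a2, b2 = torch.randn(4, 16), torch.zeros(32, 4)
out = batched_lora_forward(x, w, [(a1, b1, 2.0), (a2, b2, 2.0)], [(0, 3), (3, 6)])
```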