unify-parameter-efficient-tuning

The instantiation of Multi-head PA and the design choice of MAM adapter.

JacobYuan7 opened this issue · 3 comments

Thanks for your great work! I have read your paper, but I am a bit confused about two things.

(1) The instantiation of Multi-head PA. How is Multi-head PA (r=30) instantiated so that it has the same number of tuned parameters as PA (attn, r=30), as reported in Table 4 of the main paper? My initial thought was that Multi-head PA's tuned parameters would be N_h times those of PA.

(2) The design choice of the MAM adapter. As I understand it, MH PA (attn, r = 30) is slightly better than prefix tuning (l = 30) in Table 4 (35.3 > 35.2), and according to previous papers such as LoRA, prefix tuning is unstable to optimize. However, MAM still adopts prefix tuning. Is there a specific reason for this?

Would you mind giving me any clues about these two questions?

JacobYuan7 avatar Jul 03 '22 07:07 JacobYuan7

Thanks for your interest! For your questions:

  1. MH PA and PA use the same number of parameters when their r values are the same -- in Transformers the attention output of each head has dimension d/N_h, while the full attention output has dimension d (see the quick count after this list).

  2. Optimizing MH PA (attn) is about as difficult as prefix tuning (while PA is much more stable), and 35.3 vs. 35.2 is not a meaningful difference. So there is no specific reason to adopt prefix tuning in MAM; in fact, I recall that adopting MH PA in MAM could give similar performance.
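
Concretely -- ignoring biases, and taking each per-head adapter's down- and up-projections to act in the per-head d/N_h space -- a quick count with illustrative sizes (not numbers from the paper):

```python
# Parameter count for PA (attn) vs. MH PA (attn) at the same bottleneck r.
d, n_heads, r = 1024, 16, 30                                 # illustrative sizes
pa    = d * r + r * d                                        # PA: one adapter on the d-dim attention output
mh_pa = n_heads * ((d // n_heads) * r + r * (d // n_heads))  # MH PA: one small adapter per head
assert pa == mh_pa == 2 * d * r                              # both are 2*d*r (= 61440 here)
```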

jxhe avatar Jul 07 '22 08:07 jxhe

Thanks for your reply! But I am still a bit confused about question 1.

For PA, we have 2*d*r parameters. For MH PA, since the design is parallel to the attention module, the adapters take x of dimension d as input. Then the parameter count is (d*r + r*d/N_h)*N_h, which is not equal to the term above (worked out with sample sizes below).

Correct me if I am wrong in the calculation. Thanks!
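
Written out with some illustrative sizes, just to show the scale of the gap I get under that reading:

```python
d, n_heads, r = 1024, 16, 30                  # illustrative sizes
per_head = d * r + r * (d // n_heads)         # down-proj from full d, up-proj back to d/N_h
print(n_heads * per_head, 2 * d * r)          # 522240 vs. 61440
```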

JacobYuan7 avatar Jul 09 '22 10:07 JacobYuan7

Hi, sorry for getting back so late! (I have kind of been in post-graduation vacation mode recently...)

Back to your question: in our implementation, the input x to MH PA is effectively xW_q on each head, so that it is exactly comparable to prefix tuning. Each head's adapter therefore maps d/N_h -> r -> d/N_h, and the total is N_h * 2 * (d/N_h) * r = 2*d*r parameters. A rough sketch is below.
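
For concreteness, here is a rough PyTorch-style sketch of that reading -- not the actual code in this repo; the module/variable names, the ReLU nonlinearity, and the initialization are just illustrative. Each head's adapter takes the per-head query xW_q^(i) (dimension d/N_h) as input and produces a d/N_h-dimensional output added to that head's attention output, so the N_h heads together cost 2*d*r parameters:

```python
import torch
import torch.nn as nn

class MultiHeadParallelAdapter(nn.Module):
    """Sketch of a multi-head parallel adapter: one bottleneck per head, applied to x W_q."""
    def __init__(self, d, n_heads, r):
        super().__init__()
        self.n_heads, self.head_dim = n_heads, d // n_heads
        # Per-head projections stored as batched weights:
        # down: (N_h, d/N_h, r), up: (N_h, r, d/N_h)  ->  N_h * 2 * (d/N_h) * r = 2*d*r params
        self.down = nn.Parameter(torch.randn(n_heads, self.head_dim, r) * 0.02)
        self.up = nn.Parameter(torch.zeros(n_heads, r, self.head_dim))

    def forward(self, q):                       # q = x W_q, shape (batch, seq, d)
        b, t, d = q.shape
        q = q.view(b, t, self.n_heads, self.head_dim).transpose(1, 2)  # (b, N_h, t, d/N_h)
        delta = torch.relu(q @ self.down) @ self.up                    # per-head bottleneck
        return delta.transpose(1, 2).reshape(b, t, d)                  # added to the attention output

d, n_heads, r = 1024, 16, 30
adapter = MultiHeadParallelAdapter(d, n_heads, r)
assert sum(p.numel() for p in adapter.parameters()) == 2 * d * r
```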

jxhe avatar Jul 28 '22 15:07 jxhe