
learned linear projections l_Q, l_K, l_V

Open cuixing61 opened this issue 2 years ago • 1 comments

Hi, this question is about the linear projections l_Q, l_K, l_V of the attention module in the Prompt-to-Prompt paper. The paper states that these linear projections are learnable. However, the introduction claims that "this method does not require model training". The two statements seem to contradict each other. How are the parameters of l_Q, l_K, l_V learned?

cuixing61 avatar Mar 24 '23 02:03 cuixing61

In the attention module, l_Q, l_K, and l_V are designed to be learnable. However, since this paper builds on a pretrained model, their weights are already learned, and there is no need to train them further.
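To make this concrete, here is a minimal sketch of a cross-attention block in PyTorch. The dimensions, class name, and tensor shapes are hypothetical, not taken from the paper's code; the point is that l_Q, l_K, l_V are ordinary `nn.Linear` layers whose weights would come from the pretrained diffusion model and can simply be frozen:

```python
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    # Hypothetical dimensions; l_Q, l_K, l_V are the learnable projections.
    def __init__(self, dim=64, context_dim=77):
        super().__init__()
        self.scale = dim ** -0.5
        self.l_Q = nn.Linear(dim, dim, bias=False)
        self.l_K = nn.Linear(context_dim, dim, bias=False)
        self.l_V = nn.Linear(context_dim, dim, bias=False)

    def forward(self, x, context):
        # Project image features to queries, text features to keys/values.
        Q, K, V = self.l_Q(x), self.l_K(context), self.l_V(context)
        attn = (Q @ K.transpose(-1, -2) * self.scale).softmax(dim=-1)
        return attn @ V

attn = CrossAttention()
# In Prompt-to-Prompt the weights are loaded from a pretrained model,
# so all parameters are frozen rather than trained:
for p in attn.parameters():
    p.requires_grad_(False)

x = torch.randn(1, 16, 64)       # image features (hypothetical shape)
context = torch.randn(1, 8, 77)  # text embeddings (hypothetical shape)
out = attn(x, context)
```

Prompt-to-Prompt only *reads and manipulates* the attention maps produced by these frozen projections during inference; it never backpropagates into them, which is why no training is required.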

david20571015 avatar Aug 25 '23 16:08 david20571015