LLaMA-Adapter
Proper comparison between adapter-tuning, LoRA-tuning, prompt-tuning, and prefix-tuning?
Parameter-efficient fine-tuning methods have been studied extensively in language models, small and large, since BERT. Given this abundant prior work, why does the paper not include a controlled experiment comparing these methods?
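
For context on what such a comparison would cover, here is a minimal, self-contained PyTorch sketch (not from the paper or this repo) of where each of the four methods injects its trainable parameters. All class names, dimensions, and hyperparameters below are illustrative assumptions, not the paper's code:

```python
# Illustrative sketch of the four PEFT parameterizations; everything here
# (names, sizes) is hypothetical and chosen only for demonstration.
import torch
import torch.nn as nn

d_model, bottleneck, rank, n_virtual = 512, 64, 8, 10

class BottleneckAdapter(nn.Module):
    """Adapter-tuning: a small down/up projection with a residual
    connection, inserted after a frozen transformer sublayer."""
    def __init__(self):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))

class LoRALinear(nn.Module):
    """LoRA: a frozen weight W plus a trainable low-rank update B @ A."""
    def __init__(self, frozen: nn.Linear):
        super().__init__()
        self.frozen = frozen.requires_grad_(False)  # base weight stays fixed
        self.A = nn.Parameter(torch.randn(rank, d_model) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_model, rank))  # zero-init => no-op at start

    def forward(self, x):
        return self.frozen(x) + x @ self.A.T @ self.B.T

# Prompt-tuning: trainable soft tokens prepended to the *input* embeddings only.
soft_prompt = nn.Parameter(torch.randn(n_virtual, d_model))

# Prefix-tuning: trainable key/value vectors prepended inside the attention
# of *every* layer (one K and one V prefix per layer; a single layer shown).
prefix_kv = nn.Parameter(torch.randn(2, n_virtual, d_model))

for name, params in [
    ("adapter", list(BottleneckAdapter().parameters())),
    ("lora", list(LoRALinear(nn.Linear(d_model, d_model)).parameters())),
    ("prompt", [soft_prompt]),
    ("prefix", [prefix_kv]),
]:
    n = sum(p.numel() for p in params if p.requires_grad)
    print(f"{name:>8}: {n:,} trainable parameters per insertion point")
```

A controlled comparison of the kind asked about would presumably match these trainable-parameter budgets across methods and sweep each method's own hyperparameters (bottleneck size, LoRA rank, number of virtual tokens) before comparing downstream scores.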