Yanan Li
Thanks for your quick answer, @Aguin. I found that the original author also wrote it this way, which differs somewhat from the usual prediction setup (typically predicting `n_pred` future slots).
> Hi, can I ask what kind of adapter you are using and the command used to run finetune.py? I want to use prefix tuning and the command used to...
> For prefix-tuning, I can run the command with transformers==4.35. Later versions cause different errors, for example [huggingface/peft#1252](https://github.com/huggingface/peft/pull/1252). Please try with transformers==4.35. Thanks for your quick reply. It...
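As a quick sanity check before running finetune.py, one could verify that the installed transformers version actually matches the pin the quoted reply suggests. A minimal sketch (the helper names `matches_pin` and `transformers_is_pinned` are hypothetical, not from the repo):

```python
from importlib.metadata import PackageNotFoundError, version


def matches_pin(installed: str, pin: str) -> bool:
    """True when the installed version falls under the pinned prefix,
    e.g. "4.35.2" matches the pin "4.35" but "4.36.0" does not."""
    release = installed.split("+")[0].split(".")  # drop local tags like +cu118
    wanted = pin.split(".")
    return release[: len(wanted)] == wanted


def transformers_is_pinned(pin: str = "4.35") -> bool:
    """Check the locally installed transformers against the pin; False if absent."""
    try:
        return matches_pin(version("transformers"), pin)
    except PackageNotFoundError:
        return False
```

If the check fails, `pip install "transformers==4.35"` should downgrade to the version the reply reports as working.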
Given the MMLU performance referenced in the Llama 2 paper, I believe the results in Table 20 reflect a 5-shot setting, while LLM-Adapters' reported performance is primarily zero-shot.