ColossalAI
[BUG]: The LLaMA model trained in LoRA mode is unable to perform inference correctly
🐛 Describe the bug
Using the script 'ColossalAI/applications/Chat/examples/train_sft.sh', the LLaMA-7B model is trained with the LoRA method, but inference fails afterwards. Is this because the LoRA parameters are not loaded? How can this be solved?

Environment
Python 3.9.16
torch 1.12.1
torchaudio 0.12.1
torchvision 0.13.1
transformers 4.28.0.dev0
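One likely cause of this symptom is that the low-rank LoRA matrices are never loaded or merged back into the base weights before inference, so the model runs with only the frozen pre-trained weights. The merge itself is simply W' = W + (alpha / r) * B @ A. A minimal plain-Python sketch of that operation (illustrative shapes only, not ColossalAI's actual implementation):

```python
# Minimal illustration of LoRA weight merging (hypothetical example,
# not ColossalAI's actual code).
# A LoRA layer keeps the frozen base weight W plus two small trainable
# matrices A (r x in) and B (out x r). If only W is restored at
# inference time, the fine-tuned behaviour is lost; the adapters must
# be merged first: W' = W + (alpha / r) * B @ A.

def matmul(x, y):
    """Plain-Python matrix multiply for small demo matrices."""
    return [[sum(x[i][k] * y[k][j] for k in range(len(y)))
             for j in range(len(y[0]))] for i in range(len(x))]

def merge_lora(W, A, B, alpha, r):
    """Return W + (alpha / r) * B @ A, the weight used at inference."""
    delta = matmul(B, A)
    scale = alpha / r
    return [[W[i][j] + scale * delta[i][j]
             for j in range(len(W[0]))] for i in range(len(W))]

if __name__ == "__main__":
    # 2x2 base weight, rank-1 adapters (r = 1, alpha = 1).
    W = [[1.0, 0.0], [0.0, 1.0]]
    A = [[1.0, 2.0]]           # r x in
    B = [[1.0], [3.0]]         # out x r
    print(merge_lora(W, A, B, alpha=1.0, r=1))  # [[2.0, 2.0], [3.0, 7.0]]
```

If inference skips this merge (and also does not apply the adapters on the fly), the output is identical to the unfine-tuned base model, which matches the behaviour reported above.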
Hi @tianbuwei, thanks for the feedback. We are fixing the LoRA bug. #3439