alpaca-lora
got same answer as pre-trained base model
Need help!
I fine-tuned alpaca-lora using the author's code on my own dataset (about 4K instruction/input/output examples in a JSON file). After training, the resulting adapter weights are 67 MB.
My problem: I get the same answers as the pre-trained alpaca-lora! I also get unrelated answers for custom instructions that are in my own dataset.
Does anyone have an idea why it doesn't work correctly? Do I need to change anything in finetune.py or generate.py?
For fine-tuning I used a system with a single RTX 4090 GPU.
Thanks in advance!
Maybe you can check whether the LoRA weights you loaded are all zero. I have encountered this problem once.
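
A minimal sketch of that check, assuming the adapter was saved by PEFT to `./lora-alpaca` (the default output directory in the author's finetune.py; adjust the path to your run). It loads `adapter_model.bin` directly and reports whether every LoRA tensor is zero:

```python
# Sketch, not the author's code: inspect the saved adapter checkpoint directly.
# Assumes it lives at ./lora-alpaca/adapter_model.bin (PEFT's default file name);
# adjust the path to wherever your training run wrote its output.
import torch

adapter_weights = torch.load("./lora-alpaca/adapter_model.bin", map_location="cpu")

all_zero = True
for name, tensor in adapter_weights.items():
    zero = bool(torch.all(tensor == 0))
    all_zero = all_zero and zero
    print(f"{name}: shape={tuple(tensor.shape)}, all-zero={zero}")

if all_zero:
    print("Every LoRA tensor is zero -- the adapter is a no-op, "
          "so generation falls back to the base model's behavior.")
```

If everything comes back zero, the adapter was saved incorrectly, which (in my experience) can happen with mismatched peft versions; re-saving with a compatible peft release usually fixes it.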