
got same answer as pre-trained base model

Open amirOXip opened this issue 1 year ago • 1 comments

I need help!

I fine-tuned alpaca-lora using the author's code with my own dataset (a JSON file of 4K instruction/input/output examples). After training, the saved adapter weights are 67 MB.

My problem: I get the same answers as the pre-trained alpaca-lora!

I also get unrelated answers for custom instructions that are present in my own dataset.

Does anyone have an idea why it doesn't work correctly? Should I change any lines in the fine-tune or generate Python files?

For fine-tuning I used a system with a single RTX 4090 GPU.

Thanks in advance!

amirOXip avatar Aug 08 '23 06:08 amirOXip

Maybe you can check whether the LoRA weights you loaded are all zero. I have encountered this problem once.
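
A minimal sketch of that check, assuming PEFT-style parameter names (tensors containing `lora` in their name). Note that PEFT initializes `lora_B` to zeros, so an adapter that was initialized but never actually trained (or whose weights failed to load) will also leave the base model's outputs unchanged:

```python
import torch

def zero_lora_params(named_params):
    """Return the names of LoRA tensors that are entirely zero."""
    return [name for name, p in named_params
            if "lora" in name.lower() and torch.count_nonzero(p) == 0]

# Hypothetical usage after loading the adapter onto the base model:
# model = PeftModel.from_pretrained(base_model, "path/to/lora-weights")
# zeros = zero_lora_params(model.named_parameters())
# if zeros:
#     print("All-zero LoRA tensors (adapter may not have loaded/trained):", zeros)
```

If every `lora_B` tensor comes back all-zero after loading your trained adapter, the checkpoint was likely not loaded (or not saved) correctly, which would explain getting base-model answers.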

TianweiXing avatar Aug 12 '23 06:08 TianweiXing