
There are some issues with fine-tuning here

Lufffya opened this issue on Apr 21, 2023 · 8 comments

1. Unable to reproduce the ability to write code using the official LoRA weights (screenshot).

2. There were some issues with the model after fine-tuning on this data with the following command:

python finetune.py \
    --base_model 'decapoda-research/llama-7b-hf' \
    --data_path 'yahma/alpaca-cleaned' \
    --output_dir './lora-alpaca'

The output carries the instruction marker (screenshot).

The input instruction is repeated in the output (screenshot).

The output contains some incomprehensible content (screenshot).

I suspect the training may have broken down. Should I change some hyperparameters when fine-tuning, instead of fine-tuning in the simplest way with the defaults? I don't know where to start.
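For reference, a sketch of overriding hyperparameters on the command line instead of relying on the defaults. The flag names below follow this repo's finetune.py, but treat them (and the values, which are only illustrative) as assumptions to verify against your checkout:

# example: larger LoRA rank and more attention projections than the defaults
python finetune.py \
    --base_model 'decapoda-research/llama-7b-hf' \
    --data_path 'yahma/alpaca-cleaned' \
    --output_dir './lora-alpaca' \
    --batch_size 128 \
    --micro_batch_size 4 \
    --num_epochs 3 \
    --learning_rate 1e-4 \
    --cutoff_len 512 \
    --val_set_size 2000 \
    --lora_r 16 \
    --lora_alpha 16 \
    --lora_dropout 0.05 \
    --lora_target_modules '[q_proj,k_proj,v_proj,o_proj]'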

Lufffya avatar Apr 21 '23 09:04 Lufffya

Check my test.py in PR, it can explain all these cases.

lywinged avatar Apr 21 '23 09:04 lywinged

Check my test.py in PR, it can explain all these cases.

Okay, thank you very much. May I ask how you resolved this situation?

Lufffya avatar Apr 21 '23 09:04 Lufffya

Check my test.py in PR, it can explain all these cases.

Okay, thank you very much. May I ask how you resolved this situation?

I read all the issues, both closed and open, and the solutions are listed there.

lywinged avatar Apr 21 '23 09:04 lywinged

@lywinged Okay, I looked at the training and testing examples by running python test.py, but I still don't understand what's going on. Sorry.

Lufffya avatar Apr 21 '23 10:04 Lufffya

Maybe it's because the decapoda-research/llama-7b-hf model has a bad tokenizer. Try the one from the finetune doc; there have been some issues with the LLaMA tokenizer reported on Hugging Face.
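As a quick sanity check (a sketch, not something from this thread), you can load the tokenizer directly with transformers' LlamaTokenizer and print its special token ids; for LLaMA the bos/eos ids are normally 1 and 2, so other values point to a mis-configured tokenizer:

# print bos/eos/pad token ids of the suspect tokenizer
python -c "from transformers import LlamaTokenizer; t = LlamaTokenizer.from_pretrained('decapoda-research/llama-7b-hf'); print(t.bos_token_id, t.eos_token_id, t.pad_token_id)"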

Oxi84 avatar Apr 22 '23 15:04 Oxi84

I had this issue earlier and figured it was due to a problem saving the final adapter, adapter_model.bin:

ls -l
-rw-r--r-- 1 root root      443 Apr 21 19:29 adapter_model.bin

Note that the file size is only 443 bytes.

I manually copied the checkpoint's pytorch_model.bin and used it as adapter_model.bin instead (see the sketch after the listing below):

ls -l
-rw-r--r-- 1 root root 67201357 Apr 26 18:36 adapter_model.bin
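For anyone hitting the same truncated adapter_model.bin, a minimal sketch of this copy workaround; the checkpoint directory name (checkpoint-800) is hypothetical, so substitute the latest checkpoint under your own output_dir:

# overwrite the ~443-byte adapter with the LoRA weights saved in the last checkpoint
cp ./lora-alpaca/checkpoint-800/pytorch_model.bin ./lora-alpaca/adapter_model.bin
# sanity check: a 7B LoRA adapter should be tens of megabytes, not a few hundred bytes
ls -l ./lora-alpaca/adapter_model.bin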

kuan-cresta avatar Apr 27 '23 19:04 kuan-cresta

@kuan-cresta, your method is correct. Thanks!

WoodScene avatar May 06 '23 03:05 WoodScene

Thanks for sharing the solution, @kuan-cresta! Copying the checkpoint binary solved a similar problem for me too.

paolorechia avatar May 13 '23 11:05 paolorechia

I replaced adapter_model.bin with a checkpoint binary as @kuan-cresta suggested; there was some improvement, but the same issues persist. Do you have any more suggestions?

su-park avatar May 18 '23 01:05 su-park