
All adapter_model.bin files are the same

paulthewineguy opened this issue 1 year ago • 2 comments

I have noticed that when I fine-tune my model for different numbers of epochs, the resulting adapter parameters are always identical, and the adapter_model.bin file size stays at 443 bytes. Is this expected behavior, or am I missing something in the fine-tuning process?
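A quick way to confirm whether the saved file actually contains LoRA weights is to load it and list its tensors (a rough sketch; the path below is just an example from my setup):

import torch

# Load the saved adapter and list what it contains (assumes the default
# PyTorch .bin format used by older PEFT versions).
state_dict = torch.load("lora-alpaca/adapter_model.bin", map_location="cpu")
print(len(state_dict), "tensors saved")
for name, tensor in state_dict.items():
    print(name, tuple(tensor.shape))
# A working LoRA adapter should list many lora_A / lora_B tensors;
# a 443-byte file typically deserializes to an empty dict.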

paulthewineguy avatar Sep 10 '23 13:09 paulthewineguy

Hello, I also encountered the same problem. Have you solved it?

fst813 avatar Sep 27 '23 12:09 fst813

Same here. I followed the documentation and ran (for 3 epochs):

python finetune.py \
    --base_model 'decapoda-research/llama-7b-hf' \
    --data_path 'yahma/alpaca-cleaned' \
    --output_dir './lora-alpaca'

I also ran the following:

python finetune.py \
    --base_model='decapoda-research/llama-7b-hf' \
    --num_epochs=10 \
    --cutoff_len=512 \
    --group_by_length \
    --output_dir='./lora-alpaca-512-qkvo' \
    --lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \
    --lora_r=16 \
    --micro_batch_size=8

The resulting adapter_model.bin files in the checkpoint folders are always 443 bytes, far smaller than the adapter published at https://huggingface.co/tloen/alpaca-lora-7b/tree/main.
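One workaround I have seen suggested for this symptom is to skip the state_dict override in finetune.py and save the PEFT model directly after training finishes (a sketch only; it assumes model is the PEFT-wrapped model and output_dir is the same path passed to finetune.py):

import os

# After trainer.train() completes, save the adapter via PEFT directly
# instead of relying on the patched model.state_dict in finetune.py.
model.save_pretrained(output_dir)

# Sanity check: the adapter file should be megabytes, not ~443 bytes.
# (Newer PEFT releases may write adapter_model.safetensors instead.)
adapter_file = os.path.join(output_dir, "adapter_model.bin")
if os.path.exists(adapter_file):
    print(os.path.getsize(adapter_file), "bytes")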

My training loss is 0 and the eval loss is nan; maybe it is related to #418
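Before another full run, it may be worth checking that the LoRA layers are actually trainable; if nothing requires gradients, the loss can collapse and the saved adapter stays empty (a hedged diagnostic, assuming model is the PEFT-wrapped model returned by get_peft_model):

# Count the parameters that will actually receive gradient updates.
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(len(trainable), "trainable parameter tensors")

# PEFT's built-in summary; it should report a non-zero trainable count.
model.print_trainable_parameters()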


I'm trying the tricks mentioned in #293

vifi2021 avatar Oct 25 '23 17:10 vifi2021