ChaoyuHuang

2 issues opened by ChaoyuHuang

When I use Alpaca data to fine-tune LLaMA-13B on 4×A100 80GB GPUs, I get the following error: ``` RuntimeError: Error(s) in loading state_dict for LLaMA: size mismatch for lm_head.weight: copying...
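
A size mismatch on lm_head.weight while loading a LLaMA state_dict is often a vocabulary-size disagreement between the saved checkpoint and the config/tokenizer used to rebuild the model (Alpaca-style fine-tuning commonly adds a pad token, growing the vocab from 32000 to 32001). Below is a minimal diagnostic sketch, assuming a transformers-style checkpoint directory with a single, non-sharded pytorch_model.bin; the path is hypothetical and not taken from the issue.

```python
import torch
from transformers import AutoConfig, AutoTokenizer

checkpoint_dir = "/path/to/checkpoint"  # hypothetical; point this at the fine-tuned output dir

config = AutoConfig.from_pretrained(checkpoint_dir)
tokenizer = AutoTokenizer.from_pretrained(checkpoint_dir)

# Assumes a single, non-sharded pytorch_model.bin in the checkpoint directory.
state_dict = torch.load(f"{checkpoint_dir}/pytorch_model.bin", map_location="cpu")
lm_head_rows = state_dict["lm_head.weight"].shape[0]

print("config.vocab_size  :", config.vocab_size)
print("len(tokenizer)     :", len(tokenizer))
print("lm_head.weight rows:", lm_head_rows)
```

If these three numbers disagree, resizing the token embeddings (model.resize_token_embeddings(len(tokenizer))) or rebuilding the config with the checkpoint's vocab size before loading the weights usually clears this kind of error.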

When I run: python -m torch.distributed.run --nproc_per_node=2 longchat/train/fine_tune/train.py --model_name_or_path /mnt/yuchao/open_model/longchat/longchat-13b-16k --data_path /mnt/workspace/sft_data.json --bf16 --output_dir /mnt/yuchao/yuchao/longchat-13b-16k --num_train_epochs 3 --per_device_train_batch_size 1 --per_device_eval_batch_size 4 --gradient_accumulation_steps 1 --evaluation_strategy no --save_strategy steps --save_steps 1000...
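
Before paying for a multi-GPU launch like the one above, it can help to confirm on a single process that the base model and tokenizer load cleanly and that enough GPUs are visible. A small pre-flight sketch, assuming transformers and torch are installed and reusing the model path from the command; this is a check under those assumptions, not part of the longchat training script itself:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "/mnt/yuchao/open_model/longchat/longchat-13b-16k"  # path from the launch command

# Should be >= the --nproc_per_node value passed to torch.distributed.run.
print("visible GPUs:", torch.cuda.device_count())

tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the --bf16 flag
    low_cpu_mem_usage=True,
)
print("tokenizer vocab  :", len(tokenizer))
print("input embeddings :", model.get_input_embeddings().weight.shape[0])
```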