Issue reported by gadiluna:

When running PEFT fine-tuning with the command: `python finetune_peft.py --model_path ../../LLaMAHF/llama-7b/ --dataset_path ../../tokenizedinstruct/ --peft_mode lora --lora_rank 8 --per_device_train_batch_size 2 --gradient_accumulation_steps 1 --max_steps 30000 --learning_rate 2e-4 --fp16 --logging_steps 100...
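
For context, here is a minimal sketch of the LoRA training setup those flags imply, assuming the standard `peft` and `transformers` APIs. The paths and hyperparameters come from the command above; `lora_alpha`, `lora_dropout`, `target_modules`, the `output_dir`, and the assumption that the dataset was saved with `datasets.save_to_disk` are illustrative guesses, not the actual contents of `finetune_peft.py`:

```python
# Sketch of the training loop the command-line flags suggest; not the
# real finetune_peft.py, just the standard transformers/peft wiring.
from datasets import load_from_disk
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

# --model_path ../../LLaMAHF/llama-7b/
model = AutoModelForCausalLM.from_pretrained("../../LLaMAHF/llama-7b/")

# --peft_mode lora --lora_rank 8; alpha, dropout, and target_modules are
# assumed here (typical values for LLaMA attention projections).
peft_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, peft_config)

# --dataset_path ../../tokenizedinstruct/ (assuming a pre-tokenized
# dataset saved with datasets.save_to_disk).
train_dataset = load_from_disk("../../tokenizedinstruct/")

# The remaining flags map directly onto TrainingArguments.
training_args = TrainingArguments(
    output_dir="outputs",            # assumed; not in the command
    per_device_train_batch_size=2,   # --per_device_train_batch_size 2
    gradient_accumulation_steps=1,   # --gradient_accumulation_steps 1
    max_steps=30_000,                # --max_steps 30000
    learning_rate=2e-4,              # --learning_rate 2e-4
    fp16=True,                       # --fp16
    logging_steps=100,               # --logging_steps 100
)

trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```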