llama-recipes
A question about GPU memory usage
🚀 The feature, motivation and pitch
I followed the instructions and ran:

python llama_finetuning.py \
  --use_peft \
  --quantization \
  --model_name "meta-llama/Llama-2-7b-chat-hf" \
  --output_dir Path/to/save/PEFT/model
I wonder: if I do not pass --use_peft (which defaults to LoRA), does the script start full fine-tuning instead? If so, I found that LoRA and full fine-tuning show the same GPU memory usage. Is this right?
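A minimal sketch of one way to compare the two setups, not code from llama-recipes: load the base model, optionally wrap it with LoRA, and inspect the trainable-parameter count and peak GPU memory. The LoRA settings (r, alpha, target_modules) are illustrative assumptions, and the model is loaded in fp16 here rather than 8-bit.

```python
import torch
from transformers import LlamaForCausalLM
from peft import LoraConfig, get_peft_model

model = LlamaForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    torch_dtype=torch.float16,   # the recipe's --quantization flag would load 8-bit instead
    device_map="auto",
)

use_peft = True  # flip to False to mimic omitting --use_peft
if use_peft:
    # Wrap the frozen base model with small LoRA adapters (illustrative config).
    lora_config = LoraConfig(
        r=8,
        lora_alpha=32,
        target_modules=["q_proj", "v_proj"],
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    # With LoRA only the small adapter matrices are trainable.
    model.print_trainable_parameters()
else:
    # Full fine-tuning: every parameter is trainable.
    total = sum(p.numel() for p in model.parameters())
    print(f"trainable params: {total}")

# The frozen base weights occupy the same memory in both cases, which is why
# the footprint right after loading looks identical. The difference shows up
# during training, when gradients and optimizer states exist only for the
# trainable parameters.
print(f"peak GPU memory: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GiB")
```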
Alternatives
No response
Additional context
No response