wanda
Results After LoRA Fine-Tuning
The perplexity of the LLaMA-7B model fine-tuned with 'script.sh' from the 'lora_ft' directory differs significantly from the results reported in your paper. Could you advise whether there might be an issue with the fine-tuning code?
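For reference, here is a minimal sketch of the perplexity formula I am using when comparing against the paper's numbers, in case the gap comes from a difference in how perplexity is aggregated rather than from the fine-tuning itself (the function name and example values are illustrative, not taken from the repo):

```python
import math

def perplexity(nlls, n_tokens):
    """Perplexity = exp(total negative log-likelihood / total token count).

    `nlls` is a list of per-token (or per-chunk, token-weighted) negative
    log-likelihoods in nats; `n_tokens` is the total number of tokens scored.
    """
    return math.exp(sum(nlls) / n_tokens)

# If the average per-token NLL is ln(2), perplexity is exactly 2.
print(perplexity([math.log(2)] * 10, 10))  # -> 2.0
```

If the evaluation script averages NLL per sequence instead of per token, or uses a different context window / stride when chunking the test set, the reported perplexity can shift noticeably even with identical model weights.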