alpaca-lora
Is the generation_config important during LoRA fine-tuning?
Is the generation_config important during LoRA fine-tuning? If we aim to run inference at a high temperature (e.g., 0.9), is it advisable to train with the same setting, or should we keep it at a lower value?
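For context on the question: temperature is a sampling-time parameter. It rescales the logits before softmax when generating tokens, and the standard cross-entropy training loss is computed from the raw logits, so it should not enter the fine-tuning step at all. A minimal self-contained sketch of what temperature actually does (plain Python, no transformers dependency) is below:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature divides the logits before softmax. This only changes
    # the probabilities used when *sampling* tokens at generation time;
    # the training loss is computed from the raw, unscaled logits.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
p_low = softmax(logits, temperature=0.7)   # sharper distribution
p_high = softmax(logits, temperature=1.3)  # flatter distribution
print(p_low, p_high)
```

Higher temperature flattens the distribution (more diverse sampling), lower temperature sharpens it. Since this scaling happens only inside generation, the generation_config temperature used during training would matter only for any sample generations logged during evaluation, not for the learned weights.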