stanford_alpaca
Why does my finetuned model repeat the given prompt before generating its response?
Many thanks for your great work! Could you help me with my problem?
After I finetune my LLaMA model and prompt it with text like "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n ### Instruction: {my_instruct} ### Input: {my_input} ###Response:", the model does not output only the response part as I expect; it outputs the prompt above and then its response.
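This is expected behavior for decoder-only models: `generate()` returns the prompt tokens followed by the newly generated tokens, so decoding the whole output echoes the prompt. A common fix is to strip the prompt from the decoded text before using it. A minimal sketch (the `strip_prompt` helper and the example strings are hypothetical, not from the Alpaca codebase):

```python
def strip_prompt(full_output: str, prompt: str) -> str:
    """Return only the generated response, removing the echoed prompt."""
    if full_output.startswith(prompt):
        # Drop the prompt prefix and any leading whitespace.
        return full_output[len(prompt):].lstrip()
    # Fallback: split on the response marker from the Alpaca-style template.
    marker = "### Response:"
    if marker in full_output:
        return full_output.split(marker, 1)[1].lstrip()
    return full_output

# Hypothetical example of a model echoing its prompt:
prompt = "### Instruction: say hi ### Input:  ### Response:"
full_output = prompt + " Hello there!"
print(strip_prompt(full_output, prompt))  # prints "Hello there!"
```

Equivalently, when working with token ids directly, you can slice the output tensor past the input length (`output_ids[0][input_ids.shape[1]:]`) before calling `tokenizer.decode`, which avoids string-matching issues entirely.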
My dataset looks like:
My environment is: