
Process escape sequences given in prompts

Opened by DannyDaemonic • 0 comments

On Linux we can use a bashism (ANSI-C quoting with $'…') to inject newlines into the prompt:

./main -m models/7B/ggml-model.bin -n -1 --color -r "User:" --in-prefix " " --prompt $'User: Hi\nAI: Hello. I am an AI chatbot. Would you like to talk?\nUser: Sure!\nAI: What would you like to talk about?\nUser:'

But on Windows there is no equivalent. This patch processes escape sequences given in prompts.

On Linux and macOS, it gives us a cleaner way to do the same thing:

./main -m models/7B/ggml-model.bin -n -1 --color -r "User:" --in-prefix " " --prompt 'User: Hi\nAI: Hello. I am an AI chatbot. Would you like to talk?\nUser: Sure!\nAI: What would you like to talk about?\nUser:'

And it makes the same thing possible on Windows:

main.exe -m models\7B\ggml-model.bin -n -1 --color -r "User:" --in-prefix " " --prompt "User: Hi\nAI: Hello. I am an AI chatbot. Would you like to talk?\nUser: Sure!\nAI: What would you like to talk about?\nUser:"

DannyDaemonic · Apr 25 '23 14:04