Feature Request: Add prompt templates
Hi,
First of all, thank you for all the work on this.
Many instruction-following models are much more effective when a prompt template is used; the template is normally prepended to the actual prompt. Models can give vastly different responses depending on how they are prompted. For example, here is an instruction-following template (Alpaca):
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Prompt goes here
### Response:
It would be great to see a feature where prompt templates can be specified:
- In the CLI, by template name/alias
- Globally (?) for all models (maybe a true/false setting as well as a default prompt template setting)
- Per-model configuration
And allow the user to create their own prompt templates, or even include some default ones (one possible shape for the settings is sketched below).
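As a very rough sketch of what that could look like in the settings file (every key name here is made up; nothing like this exists in mods today):

```yaml
# hypothetical settings -- none of these keys are real mods options
prompt-templates:
  enabled: true                 # global on/off switch
  default: alpaca               # default template for all models
templates:
  alpaca: ~/.config/mods/templates/alpaca.txt
models:
  gpt-4:
    prompt-template: alpaca     # per-model override
```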
I really think that mods can benefit significantly from an integrated prompt library management process. To that end I created two Ruby gems, prompt_manager and aia. prompt_manager manages a library of parameterized prompts. Parameters in a prompt file are indicated as "[UPPERCASE TEXT]", and the user is queried for replacement values for each parameter. This is identical to the concept of a prompt template.
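For illustration, a parameterized prompt file, say refactor.txt (the file name and parameter names here are hypothetical), might contain:

```
Refactor the following [LANGUAGE] code with an emphasis on [QUALITY].
Keep the external behavior identical and explain each change you make.
```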
I wrote aia as a front end to mods. It uses prompt_manager to construct prompts that are then sent to mods on the back end for processing.
I've been using aia with mods for almost a month. It clearly demonstrates to me that mods could benefit significantly from having a templated prompt library management system. Consider this as a potential interface:
stuff | mods prompt_id < context.txt
where "prompt_id" references the base name in a prompts library directory. If the contents of the prompt_id.txt file contains parameters then mods would query the user for replacement text for each parameter.
I really think this kind of feature would be a significant enhancement to mods on the front end.
What I'm failing to grasp right now is where we can configure the type of message, i.e. the OpenAI API accepts a variety of message types.
Some simplification of the concept seems to be in order. mods accepts prompt text via three different routes: piped in, as a command line parameter, and as a redirect via "< file.txt". This is great flexibility. I suspect, but do not know for sure, that all three routes are treated as a user message, which is basically how aia, my front end to mods, treats stored prompts: aia takes the content of a prompt file, sanitizes the text, and provides it to mods as a command line parameter. When multiple context files are provided to aia, they are piped into mods, e.g. cat file1 file2 file3 | mods "user message", where the user message is obtained from the content of a prompt file.
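For reference, the three routes as I understand them (the file names are just placeholders):

```sh
# route 1: piped in
cat notes.txt | mods

# route 2: command line parameter
mods "summarize the design doc"

# route 3: redirect from a file
mods < notes.txt

# aia's combination: piped context plus a prompt argument
cat file1 file2 file3 | mods "user message"
```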
Let's say that you have an environment variable $MODS_PROMPTS_DIR that points to a directory of text files. Suppose also that it has sub-directories under which the text files (I call them prompt files) are categorized by the user in whatever organization makes sense to them. You could have one sub-directory named roles. Then mods could have a --role dev option that loads the content of $MODS_PROMPTS_DIR/roles/dev.txt as a system message. Here we call "dev" the prompt ID and $MODS_PROMPTS_DIR/roles/dev.txt a prompt file; in this case the prompt file is specialized to give the GPT the context under which it is to frame its responses, i.e. its role.
You can also have an option --id refactor that loads the content of the prompt file $MODS_PROMPTS_DIR/refactor.txt as a user message.
This allows, for example, the use of commands like these:
mods --role go_dev --id refactor < my_go_source.go
mods --role ruby_dev --id refactor < my_ruby_source.rb
rather than a command like this:
mods "... some really long verbose user prompt ..." < my_go_source.go
My Ruby gem aia currently does all of this and more as a front end to mods, and like other front ends that I've written, it will go away as the primary tool absorbs the functionality that the front end has demonstrated.
fwiw, we added roles in recent commits. Not sure how much of this problem it solves, but I thought you'd want to play with it 🙏
thanks @caarlos0, pretty sure this solves the thing I said a few comments above
Now we just need to have the mentioned Roles bit in the README and explain the three different roles (system, assistant, user?). Or at least link off to somewhere that does. Glad to see this added!
Yeah, I need to properly document it!
But, basically, you add the roles and the system prompts to your mods config, e.g.:
mods --settings
excerpt:
roles:
  "default": []
  "duude":
    - you are a classical dude from a high school movie
  "king":
    - pretend you are a king from the medieval era
  "go-dev":
    - you are a senior go developer who loves terminal tools and the simplicity of it
    - you are usually very succinct in your responses
    - you also say "i don't know" when you're not certain about something
and then you can run mods --role go-dev "write me a hello world".
Basically, the prompt list is passed down to the API as the system role (which seems to be its use case), and the user input is given as the user role. The replies from the API come in the assistant role. There's also a function role, which is not yet supported.
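To make that concrete, a call like mods --role go-dev "write me a hello world" ends up as a chat request roughly like this (the model name is just an example, and whether the list becomes one system message or several is an implementation detail):

```json
{
  "model": "gpt-4",
  "messages": [
    {"role": "system", "content": "you are a senior go developer who loves terminal tools and the simplicity of it"},
    {"role": "system", "content": "you are usually very succinct in your responses"},
    {"role": "system", "content": "you also say \"i don't know\" when you're not certain about something"},
    {"role": "user", "content": "write me a hello world"}
  ]
}
```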
Basically, the "role" in mods is not a role in the openai api, it's just a "configuration prompt" for sorts.
Hope that clarifies, and happy to answer any more questions.
I believe this is handled by roles now. Closing, feel free to reopen if needed.
Thanks!