Make prompt parameters configurable
Loads prompt constraints, resources and performance evaluations from a YAML file (default: prompt_settings.yaml). The file can be set from .env (PROMPT_SETTINGS_FILE) or from the command line (--prompt-settings or -P).
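For illustration, a prompt settings file might look like the sketch below. The exact key names and example entries are assumptions based on the description above, not the actual contents of prompt_settings.yaml:

```yaml
# Hypothetical example of a prompt_settings.yaml (key names assumed)
constraints:
  - "~4000 word limit for short term memory."
  - "No user assistance."
resources:
  - "Internet access for searches and information gathering."
  - "File management."
performance_evaluations:
  - "Continuously review and analyze your actions to perform to the best of your abilities."
```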
Background
The main reason for proposing these changes is that they can help with different LLM models, as discussed in #25, #567 and #2158. Those models don't handle prompts the same way as GPT-3.5/GPT-4, and they often get confused. This way it becomes easy to create and share prompts made specifically for them.
It can also be useful for models built for different languages, for example https://github.com/FreedomIntelligence/LLMZoo
Or simply for people who want more control over what AutoGPT can do without having to modify the code.
Changes
Moved the hardcoded default prompt constraints, resources and performance evaluations from prompt.py to the new file prompt_settings.yaml, which is used as the default file.
Added the new configuration variable PROMPT_SETTINGS_FILE to .env.template and modified config.py to handle it.
Added the new file autogpt/config/prompt_config.py, which contains the PromptConfig class; it is initialized with the file path and holds the data from the configuration file.
Modified prompt.py to use the values from the PromptConfig instead of hardcoded data.
Modified cli.py, main.py and configurator.py to handle the new --prompt-settings / -P command-line arguments.
Documentation
The new .env variable PROMPT_SETTINGS_FILE is described in .env.template, while the new --prompt-settings/-P command-line arguments are described both in cli.py and in usage.md. I followed the same approach used for the ai_settings.yaml file.
Test Plan
- Start AutoGPT without modifying any configuration; it should work just as before.
- Start AutoGPT with --prompt-settings (file) (e.g. python -m autogpt -P prompt_settings_ex.yaml), where the file doesn't exist or isn't valid. AutoGPT should give a validation error and stop.
- Start AutoGPT after setting PROMPT_SETTINGS_FILE=(file), where the file doesn't exist or isn't valid. AutoGPT should give a validation error and stop.
- Copy the prompt_settings.yaml file and change it a bit, while keeping it valid. Run AutoGPT normally; it should still run as expected.
To check whether the prompt had actually changed, I also used these changes in my fork (see #2594) while connecting to https://github.com/keldenl/gpt-llama.cpp. I know it isn't officially supported, but it's still a good and quick way to see what's going on, since the web service prints the prompt on standard output.
PR Quality Checklist
- [x] My pull request is atomic and focuses on a single change.
- [x] I have thoroughly tested my changes with multiple different prompts.
- [x] I have considered potential risks and mitigations for my changes.
- [x] I have documented my changes clearly and comprehensively.
- [x] I have not snuck in any "extra" small tweak changes.
1 Ignored Deployment
| Name | Status | Preview | Comments | Updated (UTC) |
|---|---|---|---|---|
| docs | ⬜️ Ignored (Inspect) | Visit Preview | | May 17, 2023 5:07pm |
This pull request has conflicts with the base branch, please resolve those so we can evaluate the pull request.
Conflicts have been resolved! 🎉 A maintainer will review the pull request shortly.
This is a mass message from the AutoGPT core team. Our apologies for the ongoing delay in processing PRs. This is because we are re-architecting the AutoGPT core!
For more details (and for info on joining our Discord), please refer to: https://github.com/Significant-Gravitas/Auto-GPT/wiki/Architecting
Please see
- #3954
@Boostrix I really like the idea, and my PR pretty much already covers it.
This PR exceeds the recommended size of 200 lines. Please make sure you are NOT addressing multiple issues with one PR. Note this PR might be rejected due to its size
Codecov Report
Patch coverage: 63.15% and project coverage change: -0.21% :warning:
Comparison is base (1c399e6) 62.82% compared to head (c378ab5) 62.61%.
:exclamation: Current head c378ab5 differs from pull request most recent head 70bf7cd. Consider uploading reports for the commit 70bf7cd to get more accurate results.
Additional details and impacted files
@@ Coverage Diff @@
## master #3375 +/- ##
==========================================
- Coverage 62.82% 62.61% -0.21%
==========================================
Files 73 74 +1
Lines 3373 3392 +19
Branches 487 494 +7
==========================================
+ Hits 2119 2124 +5
- Misses 1107 1120 +13
- Partials 147 148 +1
| Impacted Files | Coverage Δ | |
|---|---|---|
| autogpt/cli.py | 0.00% <0.00%> (ø) | |
| autogpt/configurator.py | 0.00% <0.00%> (ø) | |
| autogpt/main.py | 0.00% <ø> (ø) | |
| autogpt/config/prompt_config.py | 78.94% <78.94%> (ø) | |
| autogpt/config/config.py | 75.00% <100.00%> (+0.14%) | :arrow_up: |
| autogpt/prompts/prompt.py | 46.80% <100.00%> (-5.12%) | :arrow_down: |
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Do you have feedback about the report comment? Let us know in this issue.
Deployment failed with the following error:
Resource is limited - try again in 2 hours (more than 100, code: "api-deployments-free-per-day").
Would this also handle the current issue where the GPT4 default fails for people who only have access to GPT3.5 (e.g. #4229) ? Or more generally: how does this deal with multiple LLMs as part of a single profile ?
> Would this also handle the current issue where the GPT4 default fails for people who only have access to GPT3.5 (e.g. #4229)? Or more generally: how does this deal with multiple LLMs as part of a single profile?
It's not really clear to me what you are referring to, but it doesn't seem related.
Sorry, you're right; I meant to respond in the issue where someone is working on abstracting out the OpenAI API.