AutoGPT
Model temperature and token limit can now be set through the AI setup wizard
Background
There were a few TODO comments in main.py and chat.py requesting that parameters like temperature and token limit be easily adjustable rather than hard-coded. This PR adds those settings to the setup wizard at the start of the script. Users may still elect to keep the defaults.
Changes
ai_config.py gains two new parameters: ai_temperature and ai_token_limit. The file is updated to save them to and load them from the config file, and to fall back to defaults when the constructor does not receive them. The defaults are stored in an instance dictionary on the AIConfig class and can easily be adjusted if necessary.
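A minimal sketch of the constructor change described above. The parameter names follow the PR description, but the default values and the internal structure here are illustrative assumptions, not the actual AutoGPT implementation:

```python
# Hypothetical defaults dictionary; the real values live in AIConfig
# and may differ from these illustrative numbers.
DEFAULTS = {"ai_temperature": 0.0, "ai_token_limit": 4000}


class AIConfig:
    def __init__(self, ai_name="", ai_role="", ai_goals=None,
                 ai_temperature=None, ai_token_limit=None):
        self.ai_name = ai_name
        self.ai_role = ai_role
        self.ai_goals = ai_goals or []
        # Keep a per-instance copy of the defaults so they can be
        # adjusted in one place if necessary.
        self.defaults = dict(DEFAULTS)
        # Fall back to the defaults when the constructor does not
        # receive the new parameters.
        self.ai_temperature = (ai_temperature if ai_temperature is not None
                               else self.defaults["ai_temperature"])
        self.ai_token_limit = (ai_token_limit if ai_token_limit is not None
                               else self.defaults["ai_token_limit"])
```

Saving and loading would then simply serialize these two extra attributes alongside the existing ones.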
The setup wizard in main.py is modified to allow those parameters to be set by the user. construct_prompt() is renamed to load_model_config() and is modified to return the AIConfig object. Prompt construction is moved to the main body.
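The wizard step might look roughly like the following. The function name load_model_config comes from the PR description; the prompt wording, default values, and dict return type are assumptions for illustration (the real function returns an AIConfig object):

```python
def load_model_config(input_fn=input):
    """Ask the user for temperature and token limit, keeping defaults
    on blank input. input_fn is injectable for testing."""
    raw_temp = input_fn("Temperature (leave blank for default 0.0): ").strip()
    raw_limit = input_fn("Token limit (leave blank for default 4000): ").strip()
    temperature = float(raw_temp) if raw_temp else 0.0
    token_limit = int(raw_limit) if raw_limit else 4000
    return {"ai_temperature": temperature, "ai_token_limit": token_limit}
```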
chat_with_ai in chat.py is modified to receive the AIConfig as a parameter instead of token_limit, so that temperature and token limit are passed together. The body is updated accordingly to use those values.
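In sketch form, the signature change amounts to unpacking both values from the config object instead of taking token_limit as a separate argument. The model call is stubbed here, and the config is represented as a plain dict for brevity:

```python
def chat_with_ai(ai_config, user_input):
    # Both values now travel together on the config object rather
    # than token_limit arriving as its own parameter.
    temperature = ai_config["ai_temperature"]
    token_limit = ai_config["ai_token_limit"]
    # ... in the real function: trim the message history to fit
    # token_limit, then call the model with the given temperature.
    # Stubbed return for illustration:
    return {"temperature": temperature, "max_tokens": token_limit,
            "prompt": user_input}
```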
Documentation
All method documentation comments were adjusted to reflect the changes.
Test Plan
AutoGPT's core functionality was not modified, so testing focused on the setup process with different inputs. Defaulting, saving/loading the AI parameters, and running the model after setup were thoroughly tested.
PR Quality Checklist
- [x] My pull request is atomic and focuses on a single change.
- [x] I have thoroughly tested my changes with multiple different prompts.
- [x] I have considered potential risks and mitigations for my changes.
- [x] I have documented my changes clearly and comprehensively.
- [x] I have not snuck in any "extra" small tweaks.
You overdid it by removing the final CRLF. I approve anyway, but you may fix it to help us all.
My bad, I've put it back
@GulkoA There are conflicts again
This pull request has conflicts with the base branch, please resolve those so we can evaluate the pull request.
This is a mass message from the AutoGPT core team. Our apologies for the ongoing delay in processing PRs. This is because we are re-architecting the AutoGPT core!
For more details (and for info on joining our Discord), please refer to: https://github.com/Significant-Gravitas/Auto-GPT/wiki/Architecting
FWIW, this looks potentially useful and is better than the current situation, so it would be great to see this reviewed/integrated. That being said, with all the talk about sub-agents and observers/self-feedback, I am wondering whether "temperature" should not rather be an agent-specific setting, or at least one that can be overridden at the agent level.
Skimming through the sources, and not having looked at re-arch-related commits, that doesn't seem to be the case currently?
Thoughts / ideas?
This seems reasonable but needs updating pretty heavily. @GulkoA are you interested in updating this?
Also would love this to be hidden behind an advanced flag potentially
> This seems reasonable but needs updating pretty heavily. @GulkoA are you interested in updating this?
Sure, I will look into it this weekend
We may still want to expose this stuff at runtime, it's one of the most recurring issues that people are running into: https://discord.com/channels/1092243196446249134/1092423060923101304/1115963736642035732
@GulkoA Is this still valid? If so, we're prepping for release v0.4.3. Please resolve conflicts and stand by as we merge.
Yes, sorry, when is the release?
Closing old PRs.