AutoGPT
DRAFT: `PromptStrategy` can use individual `ChatModelProvider`s & set its own configuration (LLM model, temperature, top_k, top_p, ...)
Overview
- This PR introduces a `ChatModelWrapper` that can wrap different `ChatModelProvider`s (including OpenAI); new providers can be created for Gemini, Mixtral, ...
- This PR modifies `PromptManager` to create the `ChatModelWrapper` object; `PromptManager` is now an intermediary between a `PromptStrategy` and a `ChatModelWrapper`.
- `PromptStrategy.build_prompt()` still returns a `ChatPrompt`; its data is channeled via a `ChatCompletionKwargs` object to structure the data.
- Dependencies are injected between `PromptStrategy` ↔️ `ChatModelWrapper` ↔️ `ChatModelProvider`s.
- Introduces `AbstractChatMessage` so each provider can have its own roles & messages.
- `LanguageModelFunction` (via dependency injection), `ChatModelResponse` (via extension of an interface) & the arguments passed to the LLM will be formatted as the `ChatModelProvider` specifies, thus enabling different providers & APIs.
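As a rough illustration of the wiring described above, a wrapper could delegate to whichever injected provider implements a common interface, with provider-specific message classes. All class names, fields, and signatures below are a hypothetical sketch inferred from this description, not the PR's actual code:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Any


@dataclass
class AbstractChatMessage:
    """Provider-agnostic message; each provider defines its own roles."""
    role: str
    content: str


class OpenAIChatMessage(AbstractChatMessage):
    """OpenAI flavour: OpenAI-specific role names and wire format."""
    ROLES = ("system", "user", "assistant", "tool")

    def to_dict(self) -> dict[str, Any]:
        return {"role": self.role, "content": self.content}


class AbstractChatModelProvider(ABC):
    @abstractmethod
    def create_chat_completion(self, messages: list[AbstractChatMessage], **kwargs) -> dict:
        ...


class ChatModelWrapper:
    """Thin wrapper that delegates to whichever provider was injected."""
    def __init__(self, provider: AbstractChatModelProvider) -> None:
        self.provider = provider

    def chat(self, messages: list[AbstractChatMessage], **completion_kwargs) -> dict:
        return self.provider.create_chat_completion(messages, **completion_kwargs)


class EchoProvider(AbstractChatModelProvider):
    """Stand-in provider for demonstration; a real one would call an API."""
    def create_chat_completion(self, messages, **kwargs):
        return {"model": kwargs.get("model", "stub"), "last": messages[-1].content}


wrapper = ChatModelWrapper(EchoProvider())
reply = wrapper.chat([OpenAIChatMessage("user", "hi")], model="gpt-4")
print(reply["last"])  # hi
```

Swapping `EchoProvider` for a Gemini or OpenAI implementation only changes the injected object, never the calling code.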
Remaining work

- DONE: The "lib" works well; it comes from a fork that has a 40k+ line diff with master.
- TODO: Integration (and I would like to team up for it, as I have no time).
- `PromptStrategy` & `PromptManager` had an `AgentMixin` that adds methods such as `set_agent()` and `_agent`; these will need to be added.
- Imports need to be fixed.
- Poetry dependencies might need an update.
- The Langchain code can be removed with no issues 😃
More info:

- Tested under 3.12.
- Might need to be downgraded to Pydantic < 2.0.0 (mainly revert `model_dump()` to `dict()`, revert `model_config` to `BaseModel.Config`).
- ChatMessages can be generated via `OpenAIChatMessage` (an evolution of the AGPT `ChatMessage` for OpenAI; should not work for Gemini) or via LangChain. The choice is left to the developer for now; however, the LangChain dependency has not been added to the poetry config.
- All adapters have to extend `AbstractChatModelAdapter` & implement the `chat()` method. In the `chat()` method, any client can be used, starting with the OpenAI client (commented out in the file) or LangChain (if a LangChain dependency is added to `poetry.lock`).
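The adapter contract above might look roughly like this sketch, where a canned fake client stands in for the real OpenAI or LangChain client (names are assumed from the description, not copied from the PR):

```python
from abc import ABC, abstractmethod


class AbstractChatModelAdapter(ABC):
    """Every adapter must implement chat(); which client it uses inside is free."""

    @abstractmethod
    def chat(self, messages: list[dict], **kwargs) -> dict:
        ...


class FakeClientAdapter(AbstractChatModelAdapter):
    """Illustrative adapter backed by a canned response instead of a real client."""

    def chat(self, messages: list[dict], **kwargs) -> dict:
        # A real adapter would call its client here, e.g. the OpenAI SDK's
        # chat-completions endpoint or a LangChain chat model, then normalize
        # the response into the shared shape.
        return {"role": "assistant", "content": f"echo: {messages[-1]['content']}"}


adapter: AbstractChatModelAdapter = FakeClientAdapter()
reply = adapter.chat([{"role": "user", "content": "ping"}])
print(reply["content"])  # echo: ping
```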
New functionalities:

- The lib allows `tool_choice` to select a specific tool (for better guidance of the LLM and new use cases).
- The lib introduces a mechanism where the absence of a `tool_call` can trigger a new attempt (useful, as GPT-3.5 tends not to call functions). This mechanism offers the possibility to force a specific tool (via `tool_choice`) after X (default: 3) failed attempts.
- The lib offers Jinja2 integration to build prompts (requires a poetry update).
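The no-`tool_call` retry mechanism could be sketched like this (the helper and the stub LLM below are illustrative, with hypothetical names, not the lib's actual implementation):

```python
from typing import Callable, Optional


def chat_with_tool_retry(
    call_llm: Callable[[Optional[str]], dict],
    forced_tool: str,
    max_attempts: int = 3,  # mirrors the "default to 3" described above
) -> dict:
    """Retry while the response carries no tool_call; after max_attempts
    failures, force `forced_tool` via tool_choice."""
    for _attempt in range(max_attempts):
        response = call_llm(None)  # tool_choice left to the model ("auto")
        if response.get("tool_calls"):
            return response
    # The model never called a tool: constrain it to the chosen one.
    return call_llm(forced_tool)


# Demo: a stub LLM that only calls a tool when tool_choice forces it,
# imitating GPT-3.5's tendency to answer in prose instead.
def stub_llm(tool_choice: Optional[str]) -> dict:
    if tool_choice is None:
        return {"content": "chatty answer, no tool"}
    return {"tool_calls": [{"name": tool_choice, "arguments": "{}"}]}


result = chat_with_tool_retry(stub_llm, forced_tool="finish")
print(result["tool_calls"][0]["name"])  # finish
```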
⚠️ Regressions:

- The lib removes `has_oa_tool_calls_api` / `has_function_call_api` and deems that providers must offer a function-call API in 2024 (Ollama, Gemini, Mixtral do...).
- The lib erases last week's `tool_id` change made by pwut :(
- The lib erases the new retry system made by pwut :(
- 🛑 The lib doesn't support Embeddings providers anymore; the context is given above. I made a similar wrapper to handle various LLM providers & made use of Langchain, which I understand can be a philosophical issue for some.
⚠️ Behaviour changes:

- If only 1 tool is provided, this tool is automatically called as-is.
Considerations:

- `ChatPrompt` might integrate `ChatCompletionKwargs`.
- LLM models are hardcoded in the adapter => we might want to configure them another way, such as in the `.env`.
- LLM model versions are not pinned.
- Remove `**model_configuration_dict` => I need to make robust unit tests before I touch it!
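One way to address the hardcoded-model consideration, sketched here with hypothetical variable names and illustrative model IDs, is to read the model from the environment with a fallback table:

```python
import os

# Illustrative defaults; pinning exact version tags here would also address
# the "versions are not pinned" consideration.
DEFAULT_MODELS = {
    "openai": "gpt-4-0125-preview",
    "gemini": "gemini-pro",
}


def resolve_model(provider: str) -> str:
    """Prefer e.g. OPENAI_MODEL from the .env, fall back to the table above."""
    return os.getenv(f"{provider.upper()}_MODEL", DEFAULT_MODELS[provider])


print(resolve_model("gemini"))
```

With this shape, the adapter asks `resolve_model("openai")` instead of embedding a model string, and a `.env` entry like `OPENAI_MODEL=...` overrides the default without a code change.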
Disclaimer:

I can't share unit tests at the moment, but I have shared the runs from my branch on Discord. Not ported to AutoGPT, but very close to the finish line.
This PR exceeds the recommended size of 500 lines. Please make sure you are NOT addressing multiple issues with one PR.
Deploy Preview for auto-gpt-docs ready!
| Name | Link |
|---|---|
| Latest commit | 02b9d7418512033a5ed6d21a2652e6404de231b8 |
| Latest deploy log | https://app.netlify.com/sites/auto-gpt-docs/deploys/65d9cbba51602c000802b3ea |
| Deploy Preview | https://deploy-preview-6898--auto-gpt-docs.netlify.app |
Related:
- #6969
Actionables
[...]
- [ ] Amend PromptStrategy class to allow specifying compatible models in order of preference
- [ ] Amend PromptStrategy class (or subclasses) to allow customizing the prompt based on the available model(s)
I am loving it so far, it sort of works like version 0.4.1 before function calling was a thing. If you want to we can pair program on this.
Hey @Wladastic! You are welcome to help me on the home stretch!
If the imports are fixed, the PR should be fully functional. It's code that has been running for over a month on a fork.
However, the fork has a 30,000+ line diff. I have not tried to integrate it; I have just done a bunch of Ctrl+F to fix the imports, but did not bother to run the agent (given the 30,000+ line difference).
I hope most imports are right and I have not forgotten any important file. If any are missing, hit me up and I will add them.
I would say that to close the PR, we need to:

- Fix the imports
- Maybe move a file here and there
- Maybe add a missing file here and there
- Write tests

It's spring and I will be gardening in my spare time, not coding 🙁
This pull request has conflicts with the base branch, please resolve those so we can evaluate the pull request.
Has had conflicts since March and no updates.
Let me know if you are still working on this, and I can reopen
Hi,
The lib is fully functional & works on a fork (which I do not have time to maintain); it's really a very minor effort to have it integrated, most likely fixing the imports.
Pwut said it wasn't a matter of "if" but "when".
Pierre