AutoGPT
Port `autogpt.core.resource.model_provider` from AutoGPT to Forge
Actionable for #6970
Move clear-cut library code from AutoGPT to Forge (or to `/dev/null` if Forge already has a better version):
- `autogpt.core.resource.model_provider` - ...

Proposed new module name: `forge.llm`
Dependencies
- #7000
- #7002
TODO
- Port `autogpt.core.resource.model_provider`
- Make a single interface for client initialization/usage
- Check module configuration setup (see below)
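As a sketch of what a single interface for client initialization/usage could look like (all names here are hypothetical illustrations, not the actual Forge API):

```python
import os
from dataclasses import dataclass, field
from typing import Optional, Protocol


class ChatModelProvider(Protocol):
    """Hypothetical common interface every provider would implement."""

    def create_chat_completion(self, prompt: str) -> str: ...


@dataclass
class OpenAICredentials:
    """Self-configuring credentials: fall back to the environment
    when no api_key is passed in explicitly."""

    api_key: str = field(
        default_factory=lambda: os.environ.get("OPENAI_API_KEY", "")
    )


class DummyProvider:
    """Stand-in implementation, only to show the interface shape."""

    def __init__(self, credentials: Optional[OpenAICredentials] = None):
        # Configure from the environment when nothing is passed in
        self.credentials = credentials or OpenAICredentials()

    def create_chat_completion(self, prompt: str) -> str:
        return f"(dummy) {prompt}"
```

The point of the sketch is that every provider exposes the same call surface and can be constructed with zero arguments, which keeps Forge components importable stand-alone.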
Notes
- Configuration may need revision. We want Forge components to be portable and usable as stand-alone imports. Modules should be able to configure themselves if no configuration is passed in. Example: OpenAI's constructor has an `api_key` parameter. If it is not set, the client will try to read the API key from the `OPENAI_API_KEY` environment variable. Our `OpenAIProvider` wraps an `OpenAI` or `AzureOpenAI` client, depending on the configuration. We think it makes sense to preserve this behavior.
Why migrate this module?
The `model_provider` module provides functionality and extensibility that is not available from any multi-model client we know of, e.g. LiteLLM. We would like to support as many models as possible, but:
- As it is, AutoGPT's prompts are not portable between different model families. Until this is fixed, having access to any number of LLMs / LLM providers doesn't add much value.
- We are eyeing some opportunities (developing LLM polyfills/middleware) for which having low-level access to the native clients is beneficial. Related: #6969.
For these reasons, we want to keep our own client implementation for now.
This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.
This issue was closed automatically because it has been stale for 10 days with no activity.
Unstale @kcze