AutoGPT
How can I integrate the third-party API interface?
Duplicates
- [X] I have searched the existing issues
Summary 💡
I would like to switch Auto-GPT from the official OpenAI API to a third-party interface, such as https://openai.api2d.net/. How do I proceed with the change?
Examples 🌈
None
Motivation 🔦
Free usage via a third-party interface.
See the following searches:
- https://github.com/Significant-Gravitas/Auto-GPT/issues?q=is%3Aissue+is%3Aopen+offline
- https://github.com/Significant-Gravitas/Auto-GPT/issues?q=is%3Aissue+is%3Aopen+vicuna
I would like to modify the interface so that it can be invoked in a manner similar to the JavaScript code below. What steps do I need to take to modify the code? Thank you.
```js
const https = require('https');

// Request body for an OpenAI-style chat completion.
const postData = JSON.stringify({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Hello, what is your name?' }],
});

const options = {
  hostname: 'openai.api2d.net', // third-party, OpenAI-compatible endpoint
  port: 443,
  path: '/v1/chat/completions',
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: 'Bearer fkxxxxx', // Forward Key for the third-party API
  },
};

const req = https.request(options, (res) => {
  console.log('statusCode:', res.statusCode);
  console.log('headers:', res.headers);
  res.on('data', (d) => {
    process.stdout.write(d);
  });
});

req.on('error', (e) => {
  console.error(e);
});

req.write(postData);
req.end();
```
Personally, my suggestion would be to add a new command that configures the API based on a JSON template and then passes that in. That way, people could tinker with different APIs, and as long as the JSON contains some comments/meta info, the agent could also tinker with variations of it (e.g. by fetching the API schema first).
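For illustration, such a template might look something like this; every field name here is hypothetical, not an existing AutoGPT schema (JSON has no comments, so a `meta` field carries the info the agent could read):

```json
{
  "name": "api2d",
  "base_url": "https://openai.api2d.net/v1",
  "endpoints": { "chat": "/chat/completions" },
  "auth": { "type": "bearer", "key_env": "API2D_FORWARD_KEY" },
  "meta": { "comment": "OpenAI-compatible forward service" }
}
```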
See: #3651
OK, I will give it a try.
First, see: #2594
Note that you can probably take existing code snippets you may already have (like the JavaScript code above) and adapt them as needed, with a little guidance from GPT itself.
Also, if this is eventually supposed to be reviewed/integrated, it would make sense to use the equivalent of a "profile" (a Python dict with key/value pairs) to generalize the current config. That way, you would introduce profiles for different API interfaces.
This could work analogously to the "model" parameter (see llm_utils.py). If more changes are needed to accommodate a different API, you will want to come up with an APIProvider abstraction and sub-class its interface, as sketched below.
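A minimal sketch of that idea, assuming a plain HTTP client; the names `APIProvider`, `API_PROFILES`, and `API2D_FORWARD_KEY` are illustrative, not existing AutoGPT code:

```python
import os

import requests  # assumption: AutoGPT itself talks to the API via the openai package

# Hypothetical "profiles": plain dicts keyed by provider name, analogous to the
# existing "model" parameter in llm_utils.py.
API_PROFILES = {
    "openai": {"base_url": "https://api.openai.com/v1", "key_env": "OPENAI_API_KEY"},
    "api2d": {"base_url": "https://openai.api2d.net/v1", "key_env": "API2D_FORWARD_KEY"},
}


class APIProvider:
    """Base class for chat-completion backends; sub-class it per API family."""

    def __init__(self, profile: dict):
        self.base_url = profile["base_url"].rstrip("/")
        self.api_key = os.environ[profile["key_env"]]

    def create_chat_completion(self, messages: list, model: str) -> str:
        raise NotImplementedError


class OpenAICompatibleProvider(APIProvider):
    """Covers any endpoint that mirrors OpenAI's /v1/chat/completions."""

    def create_chat_completion(self, messages: list, model: str) -> str:
        resp = requests.post(
            f"{self.base_url}/chat/completions",
            headers={"Authorization": f"Bearer {self.api_key}"},
            json={"model": model, "messages": messages},
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]
```

Usage would then be, e.g., `OpenAICompatibleProvider(API_PROFILES["api2d"]).create_chat_completion([{"role": "user", "content": "Hello"}], "gpt-3.5-turbo")`.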
If in doubt, consider joining Discord to discuss the details. But with these hints (and some help from GPT) you can probably come up with a basic prototype rather quickly.
And before you spend more than 15 minutes coding something like this, please consider opening a draft PR and announcing your work on Discord, so that others are aware of your effort — that can probably save you a ton of time. Realistically, a simple prototype should not take longer than an hour to draft. If you find yourself needing more time, Discord is your friend.
#2594 makes it possible to connect AutoGPT to any third-party service which is compliant with OpenAI's API, even if the underlying model is different. There are already many projects which do that, like https://github.com/keldenl/gpt-llama.cpp, https://github.com/go-skynet/LocalAI, and even the OpenAI plugin of https://github.com/oobabooga/text-generation-webui. That API appears to be OpenAI-compliant, so that PR should be enough.
Connecting to a non-compliant API would be a different matter. But I'm not sure that adding an "APIProvider" would be the right way; that should probably be handled by a plugin. And before making a PR (or even a draft), it would be a good idea to discuss it on Discord.
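For reference, once the base URL is configurable, the Python equivalent of the JavaScript snippet above is only a few lines. A sketch using the 0.x `openai` package (which AutoGPT used at the time; the `/v1` suffix on the base URL is inferred from the path in the JavaScript example):

```python
import openai

# Point the official 0.x client at an OpenAI-compliant third-party endpoint.
openai.api_base = "https://openai.api2d.net/v1"
openai.api_key = "fkxxxxx"  # Forward Key issued by the third-party provider

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, what is your name?"}],
)
print(response["choices"][0]["message"]["content"])
```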
Therefore, if my third-party interface adheres to the specification set forth by OpenAI, may I use it? Thank you for your consideration.
Precisely. Your API should expose the same methods offered by OpenAI's API, and then it should work just fine.
Note: there's still an open discussion about that PR, and it may be implemented differently.
So, I have perused your PR and appended the following

```python
elif os.getenv("OPENAI_API_BASE_URL", None):
    openai.api_base = os.getenv("OPENAI_API_BASE_URL")
```

to the file config.py located within the directory autogpt\config. Subsequently, is it sufficient to include OPENAI_API_BASE_URL=xxx in the .env file? I am a novice on GitHub, unfamiliar with its usage.
Indeed, and you'll also have to set EMBED_DIM according to the model you are using. In the config file you'll find some examples for LLaMA and derivatives. Also, you'll have to use my fork if you want to try that; the code has only been proposed, not yet merged into the main project. https://github.com/DGdev91/Auto-GPT
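Putting the pieces together, the .env entries would look something like this. The values are illustrative only: the `/v1` suffix is inferred from the path in the JavaScript example earlier in the thread, and 1536 is the output dimension of OpenAI's text-embedding-ada-002, which an OpenAI-compatible mirror would typically match.

```ini
# .env — illustrative values only
OPENAI_API_KEY=fkxxxxx
OPENAI_API_BASE_URL=https://openai.api2d.net/v1
EMBED_DIM=1536
```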
OK, I will give it a try. Thank you.
This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.