AgentGPT
Use offline private LLMs (Llama, Alpaca, Vicuna, etc.)
Offline LLMs + online browsing if available is a use case for private agents.
GPT4All.io has an easy installer and runs on CPU on most PCs.
Vicuna https://vicuna.lmsys.org claims ~90% of ChatGPT quality (as judged by GPT-4)
No need to worry about spend or API limits
+1
This is exactly what I am looking to do myself as well. However, I'm not sure about the technical details of doing so. Can anyone provide a bit of guidance on how to use AgentGPT to point to a local model, such as GPT4All, or Vicuna? Thanks!
Hey folks and @vbwyrde, how it works currently is that we make API calls to OpenAI via langchain in agent-service.ts. We want to support both other paid models (#21) and free models. Local models are a bit tricky since there isn't an easy API interface we can call locally. I know Window.AI allows for this; if you want to look into that, it would be cool @vbwyrde
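For anyone exploring this: since most local servers (gpt-llama.cpp, GPT4All's server mode, etc.) mimic the OpenAI REST API, one low-friction pattern is to make the API base URL configurable and leave the rest of the call path unchanged. A minimal sketch, not from the AgentGPT codebase; `resolveApiBase` and the env variable are hypothetical names:

```typescript
// Hypothetical helper (not part of agent-service.ts): pick the API base URL
// so the same OpenAI-style client can target either api.openai.com or a
// local OpenAI-compatible server.
function resolveApiBase(localBase?: string): string {
  if (localBase && localBase.trim().length > 0) {
    // Strip trailing slashes so path joining stays predictable.
    return localBase.trim().replace(/\/+$/, "");
  }
  return "https://api.openai.com/v1";
}

// e.g. const base = resolveApiBase(process.env.LOCAL_LLM_BASE);
console.log(resolveApiBase());
console.log(resolveApiBase("http://localhost:8000/v1/"));
```

The client library would then be constructed with that base URL instead of a hardcoded OpenAI endpoint.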
Ok, I will take a look. I don't have my local machine set up yet, but I am getting it tomorrow. Also, just to be clear, I am quite new to the world of AI. However, I've been a programmer/analyst for 25+ years, so I feel I will be able to get up to speed. Once I make initial progress I will post back here. Thanks for the information. Much appreciated.
I unfortunately have no programming experience, but the AutoGPT crew are using https://github.com/keldenl/gpt-llama.cpp
It changes the OpenAI call to a local URL instead. Check it out; it may help with the process.
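In practice that redirect usually just means overriding the base URL the OpenAI client library reads from the environment. A sketch, assuming the local server exposes an OpenAI-compatible /v1 route on localhost:8000 (the port and the exact variable your client honors are assumptions; check your server's and client's docs):

```shell
# Assumption: local OpenAI-compatible server on port 8000.
export OPENAI_API_BASE="http://localhost:8000/v1"
# Local servers typically ignore the key, but clients often require one to be set.
export OPENAI_API_KEY="dummy-key"
```

With those set, many OpenAI client wrappers will talk to the local server without code changes.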
Yes! Thank you @IndrasMirror The relevant issue is here: https://github.com/keldenl/gpt-llama.cpp/issues/2
@asim-shrestha Are you open to help them add support for AgentGPT as well?
This is the way.
hey all – feel free to open a GitHub issue for gpt-llama.cpp and we can track progress there too. i just merged some pretty big changes that pretty much give full support for autogpt, outlined in https://github.com/keldenl/gpt-llama.cpp/issues/2#issuecomment-1519287093
will continue working towards auto-gpt, but all the work there definitely helps towards getting agent-gpt working too
@vRobM @keldenl amazing! Yeah it would be dope to get this in for AgentGPT
Yes @IndrasMirror well done again!
They've added support for LLaMA on their own. Here are hints about the files/run: https://github.com/Josh-XT/Agent-LLM/issues/24
Support switching APIs: the OpenAI API could be swapped for a locally hosted API, for example https://github.com/josStorer/RWKV-Runner. Something like RWKV World 3B or 7B is a very good starting point.
Already done, see https://github.com/reworkd/AgentGPT/blob/aeb0c6eca3415b0ad6757450b87093ca914800ac/.env.example#L26 and https://github.com/reworkd/AgentGPT/issues/61
great work!
I've been trying to get something like this running for a while. Excited to try this soon.
waiting for this as well, will be awesome to have it
Really keen to see this; having to rely on and trust OpenAI's product is a show stopper for quite a few folks.
Hey @sammcj, we are a bit coupled to OpenAI function calls ATM. You can defer to Azure OpenAI, however, which is fully GDPR compliant.
No plans to support other models without the API right now, but happy to take in PRs.
@asim-shrestha: function calling is coming:
- together.ai said recently on Discord they will implement it in a few weeks, likely end of Jan / mid Feb (guess)
- Replicate has the Lifeboat API
- ooba's PR is about to be merged soon: https://github.com/oobabooga/text-generation-webui/pull/5185
- LocalAI has had function calls for a while
- Qwen has function calling ready

Models with function calling get more numerous by the day.
One solution to rule them all would be to integrate the litellm library; it can also serve as an API proxy, load balancer, and router.
But for that to work, we would need to be able to configure the provider & model name. So for the moment, I need to set up an API proxy on a custom endpoint and use the 3 model names you provide as aliases for other endpoints/providers/models…
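For reference, that aliasing is roughly what the litellm proxy's config file does: it maps the model names the app already sends to whatever backend you actually run. A sketch only; the backend and model names below are placeholders, and the exact schema should be checked against the litellm docs:

```yaml
# Hypothetical litellm proxy config: requests for "gpt-3.5-turbo" are routed
# to a locally hosted model instead of OpenAI.
model_list:
  - model_name: gpt-3.5-turbo          # alias the app already uses
    litellm_params:
      model: ollama/llama2             # placeholder local backend
      api_base: http://localhost:11434
```

The app then points its OpenAI base URL at the proxy and keeps using the model names it already knows.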