AutoGPT
We need an autoVicuna, same design as autoGPT, but using Vicuna
Duplicates
- [X] I have searched the existing issues
Summary 💡
That would simply be great as an alternative to using OpenAI
Examples 🌈
No response
Motivation 🔦
No response
Check #local-models in the Discord; there are at least two people working on this.
In progress https://github.com/BillSchumacher/Auto-GPT/tree/vicuna
Duplicate of #461 and #143
It doesn't work great, yet.
People vastly underestimate the quality of GPT-4 and how hard it is to compete with it. But time will tell, and FOSS models are useful as helpers anyway.
> It doesn't work great, yet.
Maybe this model will help: https://huggingface.co/eachadea/ggml-toolpaca-13b-4bit. It includes the weights of Meta's open-source implementation of Toolformer ("Language Models Can Teach Themselves to Use Tools", by Meta AI), recombined with LLaMA.
FOSS
Foss?
Are you banned from both Google and ChatGPT? :-) Free and Open Source Software
> Are you banned from both Google and ChatGPT?
Tried Google, but thanks. And: open source FTW
Yes, this world needs open source. Especially when talking about autonomous AI.
Fully agree, but currently there's no open-source organisation with the amount of capital required to buy/rent that many GPUs to compete with OpenAI/Google/etc. As consumer GPUs continue to get cheaper, it'll become more achievable for most people to run capable OSS models on their own hardware.
> currently there's no open-source organisation with the amount of capital required to buy/rent that many GPUs to compete with OpenAI/Google/etc.
Maybe you'd be interested in signing this petition: https://www.openpetition.eu/petition/online/securing-our-digital-future-a-cern-for-open-source-large-scale-ai-research-and-its-safety
This facility, analogous to the CERN project in scale and impact, should house a diverse array of machines equipped with at least 100,000 high-performance state-of-the-art accelerators (GPUs or ASICs), operated by experts from the machine learning and supercomputing research community and overseen by democratically elected institutions in the participating nations.
And how about decentralized GPUs? We had SETI@home two decades ago, so I guess the free internet of the crypto era will figure this out as well. Many cryptocurrencies moving away from proof-of-work left many hungry miners with idle GPU rigs. Team FOSS will win this game!
https://petals.ml/
> Run 100B+ language models at home, BitTorrent-style.
> Run large language models like BLOOM-176B collaboratively: you load a small part of the model, then team up with people serving the other parts to run inference or fine-tuning.
> Single-batch inference runs at ≈ 1 sec per step (token), up to 10x faster than offloading, enough for chatbots and other interactive apps. Parallel inference reaches hundreds of tokens/sec.
> In progress https://github.com/BillSchumacher/Auto-GPT/tree/vicuna
How's it going?
Thanks, Bill, for the contributions. If you need help with anything, let us know.
> In progress https://github.com/BillSchumacher/Auto-GPT/tree/vicuna
> How's it going?
The prompts used with OpenAI don't work the same with Vicuna. So we need to find the right prompts to use with it.
> In progress https://github.com/BillSchumacher/Auto-GPT/tree/vicuna
> How's it going?
> The prompts used with OpenAI don't work the same with Vicuna. So we need to find the right prompts to use with it.
Makes sense... Maybe we can have a file with all the prompts needed for each step; that way we can "easily" tweak the prompts from one place...
I have started testing some prompts to simulate Auto-GPT behavior with Vicuna:
> from the list of commands "search internet", "get web contents", "execute", "delete file", "enhance code", "read file", "search file" select the most appropriate for the arguments "get info from www.test.com" and provide your answer in json format { "command", "argument" } only
{ "command": "get web contents", "argument": ["get", "info", "from", "www.test.com"] }
These prompts generate code with Vicuna:
> improve this code "int main()" to build an ERP

> Write the python code for a neural network example
If you want, I can post the prompts that Auto-GPT and BabyAGI generate here, so you can run tests.
To see the results, just run the prompt in ChatGPT.
> In progress https://github.com/BillSchumacher/Auto-GPT/tree/vicuna
> How's it going?
Pretty good.

An example using the Auto-GPT setup. With my example plugin, lol.
Slightly better output if you use my prompt in https://github.com/BillSchumacher/Auto-GPT/blob/vicuna/scripts/data/prompt.txt

and then with a little more context:

Bill, have you tried asking it to improve code?
I have not. I'm going to play with it more tomorrow, but I need to go to bed =(
This should be able to plugin to Auto-GPT soon.
Koala seems a lot less self-restricted, but also more polarized, since some training on online chat data was added. More villain-style ideas.
> In progress https://github.com/BillSchumacher/Auto-GPT/tree/vicuna
What is the process to use this? It's unclear, to anyone without a PhD, what command is used to modify the 30 or so files and what file format will be output.
```
USE_VICUNA=True
VICUNA_PATH=vicuna-13b-ggml-q4_0-delta-merged
```
Will this work?

```
vicuna-13b-ggml-q4_0-delta-merged>wsl tree
.
└── ggml-model-q4_0.bin

0 directories, 1 file
```
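For illustration, settings like these could be consumed as follows, assuming `VICUNA_PATH` points at a directory laid out like the `tree` output above. This is a sketch using the env-var names from the thread, not Auto-GPT's actual loader.

```python
import os
from pathlib import Path

def vicuna_config(env=os.environ):
    """Read the USE_VICUNA/VICUNA_PATH settings and locate the ggml weights.

    Expects VICUNA_PATH to name a directory containing a single
    ggml-model-*.bin file. Returns the weights path, or None if the
    Vicuna backend is disabled.
    """
    if env.get("USE_VICUNA", "False").lower() != "true":
        return None
    model_dir = Path(env.get("VICUNA_PATH", ""))
    bins = sorted(model_dir.glob("ggml-model-*.bin"))
    if not bins:
        raise FileNotFoundError(f"no ggml-model-*.bin under {model_dir}")
    return bins[0]
```

Under that assumption, the directory shown above (one `ggml-model-q4_0.bin` inside the merged-delta folder) would be picked up as-is.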
Bill, can you please post a tutorial on how to get at least the basic model working, so we can all help improve it?