Auto-GPT Recursive Self-Improvement

Open Torantulino opened this issue 1 year ago • 269 comments

Idea 💡

The ULTIMATE achievement for this project would be if Auto-GPT were able to recursively improve itself. That, after all, is how many predict AGI will come about.

Suggestion 👩‍💻

Auto-GPT should be able to:

  • [ ] Read its own code
  • [ ] Evaluate its limitations and areas for improvement
  • [ ] Write code to increase its abilities
  • [ ] Write tests for its code and carry out those tests

Further down the line: 📃

  • [ ] Browse its own code on GitHub
  • [ ] Evaluate it, find bugs, etc.
  • [ ] Submit pull requests

Where to start? 🤔

I have previously had success with this system prompt in the playground:

Prompt

You are AGI_Builder_GPT. Your goal is to write code and make actionable suggestions in order to improve an AI called "Auto-GPT", so as to broaden the range of tasks it's capable of carrying out.

Torantulino avatar Apr 01 '23 04:04 Torantulino

Perhaps this is what we're looking for #11?

Anyone want to weigh in?

Torantulino avatar Apr 01 '23 04:04 Torantulino

Here's what GPT-4 has to say about this issue:

Based on the GitHub issue posted, I understand that the ultimate goal is to have Auto-GPT recursively improve itself. To achieve this, we can start by implementing basic features such as reading its own code, evaluating limitations and areas for improvement, writing code to increase its abilities, and testing its own code.

Here's a plan to implement these functionalities:

  1. [X] Read its own code: You have already implemented basic file operations like read_file. Auto-GPT can use this function to read its own code.

  2. Evaluate its limitations and areas for improvement:

     • [x] Implement a function evaluate_code that analyzes code in terms of code optimization, redundant code, and missing features.

  3. Write code to increase its abilities:

     • [x] Implement a function improve_code that takes suggestions from the evaluate_code function and generates new code based on those suggestions.

  4. Write tests for its code and carry out those tests:

     • [x] Implement a function write_tests that creates test cases for the existing code.
     • [x] Implement a function run_tests that runs the generated test cases and checks if the code passes those test cases.

I will now provide an outline for implementing these functions in Auto-GPT's code:

### Evaluating code

from typing import List, Optional

def evaluate_code(code: str) -> List[str]:
    # Analyze the given code and return a list of suggestions for improvements
    pass


### Improving code

def improve_code(suggestions: List[str]) -> str:
    # Generate new code based on the suggestions provided
    pass


### Writing tests

def write_tests(focus: Optional[str] = None) -> str:
    # Generate test cases for the existing code, focusing on specific areas if required
    pass


### Running tests

def run_tests(test_code: str) -> str:
    # Execute the test cases and return the test results
    pass

These functions can be integrated into the execute_command function by adding new command cases for each of them:

# In execute_command()
elif command_name == "evaluate_code":
    return evaluate_code(arguments["code"])
elif command_name == "improve_code":
    return improve_code(arguments["suggestions"])
elif command_name == "write_tests":
    return write_tests(arguments.get("focus"))
elif command_name == "run_tests":
    return run_tests(arguments["test_code"])

Once these functionalities are implemented, Auto-GPT will be one step closer to recursively improving itself. With further improvements, Auto-GPT could potentially browse its own code on GitHub, evaluate it, find bugs, and submit pull requests.

Torantulino avatar Apr 01 '23 05:04 Torantulino

Those empty functions remind me of AI functions :)

By the way, I'm very excited to see many of my thoughts being implemented here.

alreadydone avatar Apr 01 '23 05:04 alreadydone

Ah yes! That AI Functions guide you linked is exactly how I was thinking of implementing those, if I do it that way.
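
Roughly, the pattern would be something like this. This is only a minimal sketch assuming the 2023-era openai-python ChatCompletion API; the call_ai_function helper and the wrapped signature are illustrative, not the project's actual implementation:

import openai  # assumes the legacy openai-python ChatCompletion API


def call_ai_function(function: str, args: list, description: str, model: str = "gpt-4") -> str:
    # Ask the model to "become" the function and reply only with its return value.
    messages = [
        {
            "role": "system",
            "content": f"You are now the following Python function:\n"
                       f"# {description}\n{function}\n"
                       f"Only respond with your return value.",
        },
        {"role": "user", "content": ", ".join(str(arg) for arg in args)},
    ]
    response = openai.ChatCompletion.create(model=model, messages=messages, temperature=0)
    return response.choices[0].message.content


# evaluate_code then becomes little more than a thin wrapper:
def evaluate_code(code: str) -> str:
    return call_ai_function(
        "def analyze_code(code: str) -> list[str]:",
        [code],
        "Analyzes the given code and returns a list of suggestions for improvements.",
    )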

Torantulino avatar Apr 01 '23 05:04 Torantulino

@alreadydone I love this, thanks for the suggestion!


Torantulino avatar Apr 01 '23 05:04 Torantulino

I'm working on this problem in a separate experiment. Would love to chat if you're interested. At the moment I'm working with genetic algorithms to understand which variant/mutant of the code is more performant; there are a lot of local maxima depending on how you set it up.

marcelosousa avatar Apr 02 '23 19:04 marcelosousa

This is a really cool idea! Do you think you could make the AI's logs public as it self-improves? Either in the repo or elsewhere. I would be very interested in seeing how it plans and evolves.

PolarBean avatar Apr 02 '23 21:04 PolarBean

So this is it huh... The singularity begins in a GitHub thread

MrSaad avatar Apr 03 '23 04:04 MrSaad

How about trying to drive the self-improvement by using test-driven development (TDD)?

In a recent paper they showcased how greatly GPT-4 can improve its results by reflecting on its own mistakes.

So the idea is to have it:

  1. start with a single test that fails
  2. make the test pass (iterate X times until successful - self reflecting on the code)
  3. optimize the code by refactoring if necessary, but not allowed to modify or fail the existing tests
  4. write the next small incremental test that moves towards the given requirements

What do you think?

This could also be used for any kind of code generation.
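
As a rough sketch of what that loop might look like: all of the LLM-backed helpers here (write_next_test, generate_code, run_tests) are hypothetical and passed in as callables, nothing from the actual codebase.

from typing import Callable, List, NamedTuple


class TestResult(NamedTuple):
    passed: bool
    output: str


def tdd_loop(requirements: str,
             write_next_test: Callable[[str, str, List[str]], str],
             generate_code: Callable[[str, str, List[str], str], str],
             run_tests: Callable[[str, List[str]], TestResult],
             rounds: int = 10,
             max_attempts: int = 5) -> str:
    code: str = ""
    tests: List[str] = []
    for _ in range(rounds):
        # 1./4. write the next small failing test that moves toward the requirements
        tests.append(write_next_test(requirements, code, tests))
        for _ in range(max_attempts):
            result = run_tests(code, tests)
            if result.passed:
                break
            # 2. iterate on the implementation, feeding the failure output back in
            code = generate_code(requirements, code, tests, result.output)
        # 3. an optional refactoring pass would go here, gated on all tests still passing
    return code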

Silthus avatar Apr 04 '23 14:04 Silthus

This really should take a research-driven approach. We would need a metric to base the "improvement" on. I'd focus on making a framework, and then let people use this framework for research in their branches. Proven research gets merged in.

ryanpeach avatar Apr 04 '23 15:04 ryanpeach

I wrote an improvement that speeds up the bot significantly. If you write a function called alwaysNo that just returns "n" and then you use that as input, it just exits the program super fast! No money spent!

patruff avatar Apr 04 '23 19:04 patruff

I wrote an improvement that speeds up the bot significantly. If you write a function called alwaysNo that just returns "n" and then you use that as input, it just exits the program super fast! No money spent!

That would have saved me some money 😂. Just kidding, Auto-GPT has been very helpful for understanding how to compose bigger programs, compared to LangChain, which confused tf out of me. Thank you Torantulino and everyone who has contributed.

taosx avatar Apr 04 '23 21:04 taosx

"Ok, AG is really great, but I still have no idea how to:

Give him access (with restrictions) to install new modules from GitHub. Give him the use of APIs for applications such as Office, AutoCAD, Revit, and Inventor. Supervise his work (kind of reporting in a file). Give him restrictions (highly recommended restriction/config/settings file), including the use of tokens and external connections that can provide financial risk (debt)."
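
For concreteness, a restriction/config file could look something like this. Every key here is hypothetical; nothing like it exists in Auto-GPT today:

# Hypothetical restrictions/settings file; none of these keys exist in Auto-GPT today.
RESTRICTIONS = {
    "allow_module_install": True,            # may install new modules...
    "allowed_sources": ["github.com"],       # ...but only from these hosts
    "allowed_apis": ["office", "autocad", "revit", "inventor"],
    "report_file": "logs/supervision.md",    # every action gets logged here for review
    "max_tokens_per_session": 50_000,        # hard cap on token spend per run
    "max_spend_usd": 5.00,                   # refuse any action that could exceed this
    "allow_outbound_payments": False,        # no external connections that can incur debt
}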

LeNowak avatar Apr 05 '23 12:04 LeNowak

I was attempting to get it to self-implement code, and it seems to have issues with the AI functions; that must be because I'm using GPT-3.5. It struggles to parse the response from those types of messages when evaluating code.

rkeshwani avatar Apr 05 '23 17:04 rkeshwani

We'll need to run benchmarks in GitHub Actions to validate it's not "losing" capability at every pull request. The benchmark has to use the same version of GPT every time and has to test the whole spectrum of what Auto-GPT can do:

  • write text
  • browse the internet
  • execute commands
  • etc, etc...

The challenge is engineering these tests, because they have to give us a score that we can compare with the current version. The benchmark might also need to be run multiple times, because GPT is not totally deterministic. And it might cost a lot of tokens to test the behavior (caching will be helpful here).

One idea to test browsing the internet is to create static content, a fake internet where the search results are deterministic.

Also, some things are very hard to measure, like producing art. And even if you can measure them, we might encounter a case where a task was performed slightly better but used significantly more tokens; it might be hard to decide whether things improved. Basically, this whole GitHub issue revolves around tests.
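
As a very rough sketch of such a harness: the per-task runners and the score() callback are entirely hypothetical (and are the hard part), but the shape would be to average repeated runs against a pinned model and compare branches.

from statistics import mean
from typing import Callable, Dict, List


def benchmark(tasks: Dict[str, Callable[[], str]],
              score: Callable[[str, str], float],
              runs: int = 3) -> Dict[str, float]:
    # Each task runs several times because GPT is not deterministic; the
    # score() callback rates one task's output between 0 and 1.
    return {name: mean(score(name, run_task()) for _ in range(runs))
            for name, run_task in tasks.items()}


def regressed(baseline: Dict[str, float], candidate: Dict[str, float],
              tolerance: float = 0.05) -> List[str]:
    # Tasks where the candidate branch scores noticeably worse than the baseline.
    return [task for task in baseline
            if candidate.get(task, 0.0) < baseline[task] - tolerance]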

waynehamadi avatar Apr 06 '23 01:04 waynehamadi

@Torantulino https://github.com/sahil280114/codealpaca

pmb2 avatar Apr 06 '23 03:04 pmb2

I like your work, Torantulino. I think you should keep doing your own ideas instead of letting others decide for you, because you're a smart man. I think you've got this.

kairie1 avatar Apr 06 '23 10:04 kairie1

Loving your work.

Can you imagine the next level to this? An environment that allows modular setup of any number of "task doers", "reviewers" and container types.

A user could basically create their own system/org chart to solve a specific type of problem.

The system could even report back to the user for outside input at set intervals.

Basilthedog89 avatar Apr 07 '23 10:04 Basilthedog89

A cool extension to this idea would be having Auto-GPT spin up an instance of itself every couple of hours, crawl all the current PRs, and build a sandboxed version of itself with each new PR merged. Then it could determine, through some combination of unit tests, benchmarking, and evaluating its own code quality, whether the PR was beneficial. This could unclog the massive number of PRs being made and hopefully let only the good ideas shine through. One problem I see is people trying to inject malicious code; however, if adequately sandboxed this may not be an issue.
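
A rough sketch of that sandbox check might look like the following. It assumes the repo ships a Dockerfile and a pytest suite; every detail here is hypothetical, not part of the current project.

import subprocess


def test_pr_in_sandbox(pr_number: int) -> bool:
    # GitHub exposes every pull request as refs/pull/<n>/head, so fetch it locally.
    branch = f"pr-{pr_number}"
    subprocess.run(["git", "fetch", "origin", f"pull/{pr_number}/head:{branch}"], check=True)
    subprocess.run(["git", "checkout", branch], check=True)
    # Build an isolated image for this PR (assumes a Dockerfile in the repo root).
    image = f"autogpt-pr-{pr_number}"
    subprocess.run(["docker", "build", "-t", image, "."], check=True)
    # Run the test suite inside the container with networking disabled.
    result = subprocess.run(["docker", "run", "--rm", "--network", "none",
                             image, "python", "-m", "pytest", "-q"])
    return result.returncode == 0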

Jordan-Mesches avatar Apr 07 '23 15:04 Jordan-Mesches

Jordan: this might be useful https://github.com/lesfurets/git-octopus. I think nowadays we are doing a lot of things imprecisely with LLMs that existing tools do much more efficiently.

marcelosousa avatar Apr 07 '23 16:04 marcelosousa

@marcelosousa "I think nowadays we are doing a lot of things imprecisely with LLMs that existing tools do much more efficiently" - you're definitely correct with that statement. However, the usage I meant was not simply merging all PRs, but having Auto-GPT evaluate all of its current PRs individually and automatically determine which ones are worth the maintainers' time. To extend that, if Auto-GPT finds a PR promising but still lacking in some ways, it could comment on the PR with a suggested list of changes. The whole point of this is to alleviate pressure on the maintainers.

Jordan-Mesches avatar Apr 07 '23 16:04 Jordan-Mesches

I would love to talk with the maintainers. We have been building Reviewpad to help with that: you can specify multiple PR automations and policies, and on top of that you can use GPT-4 to summarize and, very soon, automatically approve PRs.

marcelosousa avatar Apr 07 '23 17:04 marcelosousa

You know, my aunt owns her own business and I'm her designer, but this is way easier than working for my aunt.

kairie1 avatar Apr 07 '23 18:04 kairie1

You know, my aunt owns her own business and I'm her designer, but this is way easier than working for my aunt.

Fam be like that.

supermario-ai avatar Apr 08 '23 00:04 supermario-ai

Very good stuff, y'all! I'm excited to implement it in my personal version tomorrow.

I'll add my take and ideas in the next couple of days as I have time or the pace necessitates.

TemporalLabsLLC-SOL avatar Apr 08 '23 06:04 TemporalLabsLLC-SOL

A thought on how to implement this would be building a Python class or set of functions that can open PRs. Using PyGithub, the script could pull all PRs and iterate through them. A place to start could be a set of tests that must pass; otherwise the bot reviews and comments on the PR and closes it. If the PR is only a comment or test, the bot can try to build a Python script to satisfy the given functionality. Requiring the tests to pass would help keep the AI from spinning cycles on poor code.

I think having the bot work off a directive submission process, something like "create a Python script that calls llama and returns the result in JSON", would really kick off the process. Plus it would crowd-source ideas. A 'suggested ideas' section of only text or markdown may be an option. Or we could utilize GitHub issues: pull all issues and look for a specific format, e.g. CREATE: some idea. The script would only need a key to submit PRs, which anyone can do. But to read, review, and merge PRs we would need a key from someone with write access. It could merge to a dev branch so it doesn't break things too badly.
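
A bare-bones sketch of that flow with PyGithub could look like this. The repo name, token handling, and the run_test_suite helper are all assumptions for illustration:

import os

from github import Github  # PyGithub


def review_open_prs(run_test_suite) -> None:
    # run_test_suite(pr) -> (passed, report) is a hypothetical helper that checks
    # the PR out in a sandbox and runs the tests against it.
    gh = Github(os.environ["GITHUB_TOKEN"])
    repo = gh.get_repo("Torantulino/Auto-GPT")
    for pr in repo.get_pulls(state="open"):
        passed, report = run_test_suite(pr)
        if passed:
            pr.create_issue_comment("Automated checks passed:\n\n" + report)
        else:
            pr.create_issue_comment("Automated checks failed, closing for now:\n\n" + report)
            pr.edit(state="closed")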

coleyr avatar Apr 08 '23 11:04 coleyr

My intuition is that any poor performance here could be improved by multiple rounds of templated prompts for each step, perhaps even recursively. This introduces a new question: how many rounds to do for every step of thinking, and whether sufficiently good prompts (or templates with scoring criteria) can either be created ab initio by simple queries or be created by clever human prompt writing to get faster progress.
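
For example, a trivial (entirely hypothetical) version of such a loop, with complete() standing in for whatever LLM backend is used and two fixed templates:

from typing import Callable

CRITIQUE_TEMPLATE = "Score this answer from 1-10 and list its weaknesses:\n{draft}"
REVISE_TEMPLATE = ("Improve the answer below so it addresses the critique.\n"
                   "Critique:\n{critique}\nAnswer:\n{draft}")


def refine(task_prompt: str, complete: Callable[[str], str], rounds: int = 3) -> str:
    # complete() stands in for an LLM call; each round critiques the current
    # draft with one template and rewrites it with another.
    draft = complete(task_prompt)
    for _ in range(rounds):
        critique = complete(CRITIQUE_TEMPLATE.format(draft=draft))
        draft = complete(REVISE_TEMPLATE.format(critique=critique, draft=draft))
    return draft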

kanzure avatar Apr 08 '23 11:04 kanzure

fear it.....run from it....... Destiny arrives all the same

lunarflu avatar Apr 08 '23 13:04 lunarflu

Does anyone have suggestions for self-improvement books the AI could read, like self-help books and stuff?

(edit: this was a joke)

lahwran avatar Apr 08 '23 17:04 lahwran