AutoGPT
Start from exactly the same state by caching Agent and related classes
Duplicates
- [X] I have searched the existing issues
Summary 💡
By making Agent class serializable (and the classes it uses), and using pickle after each command is executed, one can restore the exact same state at startup.
Motivation 🔦
The current situation is far from ideal. Often, when you restart after quitting a run, even with memory configured correctly (e.g. Redis), the agent seems to search for the same things again, because it ultimately starts from a fresh state. Also, in agent.py:
```python
self.summary_memory = (
    "I was created."  # Initial memory necessary to avoid hallucination
)
```
Of course, you would need to reconnect to Redis and handle similar things, but those are details.
Examples 🌈
Use pickle.dump() to save the Agent into cache/last_agent.bin after each command. At startup, ask whether to load it, and use a separate flow for loading the agent.
In the future, save multiple agents into `cache/cache_agent_{agent_name}.bin`, and ask at startup which agent the user wants to load today.
One might need to dump redis database in this case.
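The proposal above could be sketched roughly like this. This is a minimal sketch under stated assumptions: `Agent` here is a stand-in dataclass rather than the real AutoGPT class, the `cache/` layout and helper names are invented for illustration, and non-picklable members (such as a live Redis connection) would need extra handling.

```python
import os
import pickle
from dataclasses import dataclass, field

CACHE_DIR = "cache"  # assumed location, not an existing AutoGPT convention


@dataclass
class Agent:
    # Stand-in for the real Agent class, which would need to be picklable.
    ai_name: str
    summary_memory: str = "I was created."
    full_message_history: list = field(default_factory=list)


def save_agent(agent: Agent) -> str:
    """Pickle the agent after each executed command, one file per agent name."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, f"cache_agent_{agent.ai_name}.bin")
    with open(path, "wb") as f:
        pickle.dump(agent, f)
    return path


def list_saved_agents() -> list:
    """Agent names the user could be offered at startup."""
    if not os.path.isdir(CACHE_DIR):
        return []
    prefix, suffix = "cache_agent_", ".bin"
    return [
        name[len(prefix):-len(suffix)]
        for name in os.listdir(CACHE_DIR)
        if name.startswith(prefix) and name.endswith(suffix)
    ]


def load_agent(name: str) -> Agent:
    """Restore an agent saved by save_agent()."""
    path = os.path.join(CACHE_DIR, f"cache_agent_{name}.bin")
    with open(path, "rb") as f:
        return pickle.load(f)
```

At startup, if `list_saved_agents()` is non-empty, the CLI could prompt whether (and which agent) to resume, and only fall back to constructing a fresh Agent otherwise.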
Save-state has been proposed before, but it can be difficult to implement. It is definitely going to be needed eventually, but I think the devs still need to work out the re-arch.
There are other variables that need to be considered. A lot of what AutoGPT can do depends on the state of the environment it runs in, not just the state of the agents. A common problem with save-state features in applications is file system changes: much of the work agents do references the local file system, and changes made to it while the agent isn't running may lead to unpredictable behavior.
File system changes are just one example. User security, networking changes, and API changes can all drastically affect save-states.
Not to say that this utility doesn't need this, just that it may cause complications, if we only provide save-state mechanisms for the agents.
Thanks for your reply! I don't see it that way. I am talking about loading state after a normal quit, where everything should stay the same. If there are file system changes in the workspace or goals, then arguably the user made them precisely so the agent would run differently in the next run. So, as I see it, it is pickle plus a function to reconnect to Redis, and it shouldn't be that hard to implement. The second phase might be more difficult and would require handling state in terms of Redis plus the workspace.
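The "pickle plus reconnect" idea maps naturally onto Python's pickle hooks. A minimal sketch, assuming the Agent keeps its connection URL as plain data: `MemoryClient` below is a placeholder for a live client (e.g. a Redis connection), since sockets generally can't be pickled, so it is dropped on save and re-created on load.

```python
import pickle


class MemoryClient:
    """Placeholder for a live memory backend connection (e.g. Redis).
    Live connections are not picklable, so the Agent drops this on save
    and rebuilds it on load."""

    def __init__(self, url: str):
        self.url = url
        self.connected = True  # a real client would open a socket here


class Agent:
    # Minimal stand-in: only the pickle hooks matter for this sketch.
    def __init__(self, memory_url: str):
        self.memory_url = memory_url          # plain data, picklable
        self.memory = MemoryClient(memory_url)  # live, not picklable
        self.summary_memory = "I was created."

    def __getstate__(self):
        # Copy the attribute dict and drop the live connection before pickling.
        state = self.__dict__.copy()
        state["memory"] = None
        return state

    def __setstate__(self, state):
        # Restore attributes, then reconnect using the saved URL.
        self.__dict__.update(state)
        self.memory = MemoryClient(self.memory_url)


agent = Agent("redis://localhost:6379")
agent.summary_memory += " Then I browsed the web."
restored = pickle.loads(pickle.dumps(agent))
```

With this shape, resuming after a normal quit is just unpickling; the reconnect happens automatically in `__setstate__`.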
The question is whether the workspace file system is transient. It is solvable: I generally keep my container running, and it is mapped to disk (is that because it is a dev container?). So that is a suggestion.
A state of exit due to error is mostly transient (not saved into members) but may be more difficult to handle. Loading should continue the flow from the last executed command and try to rerun it.
I think I have already spent more than $15 running the same queries over and over, not to mention the time.
Makes sense, though I think save-states would make more sense on an environment level.
Especially since the bot could be run from docker. It can also be installed on a VM where (in theory) the only type of transient that could affect AutoGPT's progress is a network transient.
pickle is straightforward as is, so we could just as well tinker with the idea, run some experiments to suspend/resume a single top-level agent and its associated task list, and see where that takes us.
@Boostrix So you think there is merit in implementing this before the redesign of the arch?
The re-arch is already ongoing, and I have yet to find a real "planning" or "tasking" system (let alone any proper attempt at persistence, except for the weird file_logger.txt stuff)... then again, I'm not sure what the team is planning. In general, I would not touch many files due to the re-arch, but as long as it's just some tiny hook calling code that lives in separate files/directories, there's probably no time wasted experimenting with such a scheme.
I have tinkered with a scheme to solve the lack of planning and persistence at the same time by treating all tasks as lists of actions that are executed by pushing/popping a stack: https://github.com/Significant-Gravitas/Auto-GPT/issues/3593
Basically, we would recursively call the LLM to split each task into a list of subtasks until it ends up with atomic steps, at which point those are added to a JSON file. The JSON would tell the agent what stage it is (or was) at: basically, a list of tasks and an index that tells the agent to continue with a certain task, which in turn refers to other tasks.
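The stack-based decomposition described above could be sketched like this. Everything here is an assumption for illustration: `decompose()` is a stub standing in for the LLM call, the example tasks are invented, and the JSON schema (`steps` plus `next_index`) is just one way to record where the agent should resume.

```python
import json


def decompose(task: str) -> list:
    """Stub standing in for an LLM call that splits a task into subtasks.
    Returning [] means the task is already an atomic step."""
    plans = {
        "write a report": ["gather sources", "draft text"],
        "draft text": ["write intro", "write body"],
    }
    return plans.get(task, [])


def plan(task: str) -> list:
    """Recursively expand a task into an ordered list of atomic steps,
    using an explicit stack (push subtasks, pop the next one to inspect)."""
    steps, stack = [], [task]
    while stack:
        current = stack.pop()
        subtasks = decompose(current)
        if not subtasks:
            steps.append(current)             # atomic: record it
        else:
            stack.extend(reversed(subtasks))  # reversed so pop() keeps order
    return steps


def save_progress(path: str, steps: list, next_index: int) -> None:
    """Persist the step list and the index of the next step to run,
    so a restarted agent knows where to continue."""
    with open(path, "w") as f:
        json.dump({"steps": steps, "next_index": next_index}, f)
```

On restart, the agent would load the JSON, skip everything before `next_index`, and continue executing from there.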
Based on skimming the wiki and the re-arch branch, there's apparently nothing in the pipeline regarding persistence or planning:
- https://github.com/Significant-Gravitas/Auto-GPT/tree/re-arch/hello-world (the re-arch branch now also supports running ./run.sh status for details)
- https://github.com/Significant-Gravitas/Auto-GPT/wiki/Architecting
Here are some related PRs:
- #2530
- #2602
Also relevant: #822
This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.
This issue was closed automatically because it has been stale for 10 days with no activity.