
Display short-term and long-term memory usage

Torantulino opened this issue 2 years ago • 13 comments

Auto-GPT currently pins its long-term memory to the start of its context window. It is able to manage this through commands.

Auto-GPT should be aware of its short- and long-term memory usage so that it knows when something is going to be deleted from its memory due to context limits, e.g. memory usage: (2555/4000 tokens)

This may lead to some interesting behaviour where it is less inclined to read long strings of text, or is more meticulous about saving information to long-term memory when it sees it's running low on tokens.
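
As a minimal sketch of what surfacing that usage line could look like (assuming the `tiktoken` package for token counting; the limit and function names here are illustrative, not AutoGPT's actual settings):

```python
# Hypothetical sketch: report context usage so the agent can see how full its window is.
import tiktoken  # assumes the tiktoken package is installed

CONTEXT_LIMIT = 4000  # illustrative limit, not AutoGPT's actual setting


def memory_usage_line(messages: list[str], model: str = "gpt-3.5-turbo") -> str:
    """Return a status line like 'memory usage: (2555/4000 tokens)'."""
    enc = tiktoken.encoding_for_model(model)
    used = sum(len(enc.encode(m)) for m in messages)
    return f"memory usage: ({used}/{CONTEXT_LIMIT} tokens)"


print(memory_usage_line(["Summarise the repo README.", "The README describes..."]))
```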

Torantulino avatar Mar 29 '23 05:03 Torantulino

From what I was reading, you can take the context window, and compress chunks at the rear into summaries.
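
Something like this rolling compression, as a rough sketch (here `summarize` is just a placeholder for an LLM call, not anything in AutoGPT's codebase):

```python
# Rough sketch of compressing the oldest part of the context into a summary.
# `summarize` stands in for an LLM call and is purely illustrative.

def summarize(text: str) -> str:
    # Placeholder: in practice this would be a call to the chat model.
    return text[:200] + "..."


def compress_rear(history: list[str], keep_recent: int = 6) -> list[str]:
    """Fold everything older than the last `keep_recent` messages into one summary."""
    if len(history) <= keep_recent:
        return history
    older, recent = history[:-keep_recent], history[-keep_recent:]
    return [f"Summary of earlier conversation: {summarize(' '.join(older))}"] + recent
```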

claysauruswrecks avatar Mar 29 '23 06:03 claysauruswrecks

Interesting idea! This would expand short-term memory.

Currently Auto-GPT manages its own "Long-Term Memory", which is "pinned" to the start of the context.

Torantulino avatar Mar 29 '23 07:03 Torantulino

Another approach could be to run history through an embeddings API, save the embeddings to a Vector DB, then do a lookup for relevant memories on each step.
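
A minimal sketch of that embed-store-retrieve loop, assuming the pre-1.0 `openai` Python client that was current at the time and using numpy cosine similarity in place of a real vector DB (all names here are illustrative):

```python
# Minimal sketch of the embed/store/retrieve loop; numpy stands in for a vector DB.
import numpy as np
import openai  # assumes OPENAI_API_KEY is configured, pre-1.0 client

def embed(text: str) -> np.ndarray:
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=[text])
    return np.array(resp["data"][0]["embedding"])

memory_texts: list[str] = []
memory_vectors: list[np.ndarray] = []

def remember(text: str) -> None:
    memory_texts.append(text)
    memory_vectors.append(embed(text))

def recall(query: str, k: int = 5) -> list[str]:
    q = embed(query)
    sims = [float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v))) for v in memory_vectors]
    top = sorted(range(len(sims)), key=sims.__getitem__, reverse=True)[:k]
    return [memory_texts[i] for i in top]
```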

tedspare avatar Mar 30 '23 15:03 tedspare

I've been meaning to look into this. Is it practical to regularly rebuild/add to an embedding?

Forgive my ignorance, I've never used them.

Torantulino avatar Mar 30 '23 15:03 Torantulino

All good! Thanks for your reply. In my (limited) understanding, adding embeddings is no more than adding a row to a DB (but with vector data).

tedspare avatar Apr 02 '23 23:04 tedspare

> Another approach could be to run history through an embeddings API, save the embeddings to a Vector DB, then do a lookup for relevant memories on each step.

I really think this is an excellent idea. In fact, it might be a huge win. It would effectively give you an indefinite context window for "long term" memory. Of course the discarding of "irrelevant" info in any given call to the model will be imperfect, but I'd bet it'll work pretty well.

I was thinking about this myself this morning and wondered if anybody else already mentioned it. Basically I see it as an "associative memory", much like what we have in our own minds. You could perhaps have the GPT model generate a few orthogonal short summaries of what it just output and responded to (top 5?), store these in the vector db, and then get the most relevant "memories" for subsequent calls based on this same process.

So combine these "N closest" memories with the most recent ones, and I think you'll get a very effective long-term memory mechanism.

Is there anyone out there who sees problems with this idea or has a way to improve upon it? It seems super awesome to me...
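
As a rough sketch of that recency-plus-relevance combination (building on a hypothetical `recall` similarity lookup like the one sketched earlier in this thread; none of these names come from AutoGPT itself):

```python
# Sketch of blending the most recent messages with the N closest memories.
# `recall` is the hypothetical similarity lookup from the earlier sketch.

def build_context(history: list[str], query: str,
                  n_recent: int = 4, n_relevant: int = 5) -> list[str]:
    relevant = recall(query, k=n_relevant)
    recent = history[-n_recent:]
    # De-duplicate while preserving order: relevant memories first, then recent turns.
    seen, merged = set(), []
    for msg in relevant + recent:
        if msg not in seen:
            seen.add(msg)
            merged.append(msg)
    return merged
```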

jantic avatar Apr 03 '23 14:04 jantic

@Torantulino I'm going to pick this up if it is ok with you. Here is my laundry list:

  1. Store long-term memory in Pinecone: https://www.pinecone.io/. There are lots of options here; this one is fairly simple and is what babyagi is using: https://github.com/yoheinakajima/babyagi
  2. Pull in n closest memories. Default n to 5, but make it configurable. (Do some experimentation on what seems most useful.)
  3. Make this memory object a class that is optional. Map the delete and add operations on the current memory dict obj to Pinecone operations. I'll try to keep this fairly extensible so we could easily make classes with the same interface for different vector DBs.
  4. Add in a pinecone api key in .env.template
  5. Update the readme to tell people to use it.
  6. If no API key is specified, tell the user they are using local memory (the current implementation). Also support an explicit local-memory option.

Let me know if there is anything here you'd like me to change. I should have a working version of this by EOD tomorrow EST.

I would hope to then be able to extend this to processing files in large repos too, and eventually I want to make this feed into the self-improvement pipeline with respect to remembering where relevant local files are for large tasks.
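
A rough interface sketch of the optional, swappable memory class from item 3, with the local fallback from item 6. Class and method names here are purely illustrative, not the names used in the actual PR:

```python
# Rough interface sketch for an optional, swappable memory backend (item 3 above).
from abc import ABC, abstractmethod

class MemoryProvider(ABC):
    @abstractmethod
    def add(self, key: str, text: str) -> None: ...

    @abstractmethod
    def get_relevant(self, query: str, k: int = 5) -> list[str]: ...

    @abstractmethod
    def delete(self, key: str) -> None: ...

class LocalMemory(MemoryProvider):
    """Fallback used when no Pinecone API key is configured (item 6 above)."""
    def __init__(self):
        self._store: dict[str, str] = {}

    def add(self, key, text):
        self._store[key] = text

    def get_relevant(self, query, k=5):
        # Naive substring match; a real backend would rank by embedding similarity.
        return [t for t in self._store.values() if query.lower() in t.lower()][:k]

    def delete(self, key):
        self._store.pop(key, None)
```

A Pinecone-backed class would then implement the same three methods against the Pinecone client.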

dschonholtz avatar Apr 03 '23 23:04 dschonholtz

I believe it's possible to simply use a key-value store as memory and make it available to Auto-GPT as a tool, letting the model itself decide when and what to read from and write to the memory. Auto-GPT already has code execution implemented, so it has all Python functions available as tools, and this is just one more tool.

To make the model aware of the memory tool and good at utilizing it, we would have to finetune it (e.g. using the Toolformer approach; there are two open-source implementations, and this one is more popular than the official one), and we would need to collect some usage data (there isn't any paper or implementation that uses a memory tool yet, AFAIK).

Finetuning is available for ChatGPT-3.5 but not GPT-4, but I think we'll need to finetune anyway if we want Auto-GPT to create new tools and self-improve. We could also use an open model (many of them have LoRA finetuning implementations), which would be less powerful, but we could expose the GPT-4 API to it and train it to use the API as a tool, so the whole system would not be less powerful.
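
A minimal sketch of what such a key-value memory tool could look like if exposed as plain Python functions (function names are illustrative; nothing here is part of AutoGPT):

```python
# Minimal key-value memory "tool" the model could be instructed to call.
long_term_memory: dict[str, str] = {}

def memory_write(key: str, value: str) -> str:
    """Store a fact under a key; returns a confirmation the model can read."""
    long_term_memory[key] = value
    return f"Stored '{key}'."

def memory_read(key: str) -> str:
    """Retrieve a previously stored fact, or report that nothing is stored."""
    return long_term_memory.get(key, f"No memory stored under '{key}'.")
```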

alreadydone avatar Apr 04 '23 05:04 alreadydone

Actually, maybe we can make GPT models aware of the memory tool using the system message, without the need for finetuning, since it's just a single simple tool. Something like:

> You are a language model with limited memory (or context length), so you'll forget what was said 8,000 tokens (3,000 words?) earlier. However, you now have access to a key-value database that serves as your long-term memory. If you are about to forget something important, you may say <remember "k" "v"> to store it in the database, which you can later recall by saying <recall "k">.

I'm not experienced in prompt engineering, so there's definitely room for improvement. Notice that

> In general, gpt-3.5-turbo-0301 does not pay strong attention to the system message, and therefore important instructions are often better placed in a user message.

so this should work better with GPT-4 than 3.5. If you have access, please try!
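
A sketch of how an agent loop could intercept those tags in the model's output. The tag syntax follows the suggested system message above; the parsing code itself is illustrative and not part of any existing implementation:

```python
# Sketch of intercepting <remember "k" "v"> and <recall "k"> in model output.
import re

memory: dict[str, str] = {}

def handle_memory_tags(model_output: str) -> str:
    """Store any <remember> tags, then substitute <recall> tags with stored values."""
    for key, value in re.findall(r'<remember "([^"]+)" "([^"]+)">', model_output):
        memory[key] = value

    def substitute(match: re.Match) -> str:
        return memory.get(match.group(1), "[nothing stored]")

    return re.sub(r'<recall "([^"]+)">', substitute, model_output)

print(handle_memory_tags('<remember "goal" "refactor the parser"> Working on it.'))
print(handle_memory_tags('Earlier goal was: <recall "goal">'))
```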

alreadydone avatar Apr 04 '23 06:04 alreadydone

This works. It's hard to test this kind of thing concretely, but anecdotally it seems much smarter now. I'm implementing something to actually track memory usage (the number of memory keys taken up, or the number of vectors in the DB) and output it between thoughts. Then I'll do another pass with the debugger, and assuming it appears to be doing what I think it's doing, I'll put it up for review.
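
Something along these lines for the stats line printed between thoughts, as an illustrative sketch (the counts would come from whatever memory backend is in use; all names here are hypothetical):

```python
# Illustrative sketch of a memory-stats line emitted between thoughts.
def memory_stats(num_vectors: int, tokens_used: int, token_limit: int = 8000) -> str:
    """Format a one-line summary of long-term and short-term memory usage."""
    return (f"MEMORY: {num_vectors} vectors stored | "
            f"context: {tokens_used}/{token_limit} tokens")

print(memory_stats(num_vectors=42, tokens_used=2555, token_limit=4000))
```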

dschonholtz avatar Apr 04 '23 21:04 dschonholtz

See pull: https://github.com/Torantulino/Auto-GPT/pull/122

dschonholtz avatar Apr 04 '23 22:04 dschonholtz

Is this resolved with the output of --debug?

Pwuts avatar Apr 18 '23 20:04 Pwuts

> I'm implementing a thing to actually track memory usage, number of memory keys taken up or number of vectors in DB to output between thoughts.
>
> Auto-GPT should be aware of its short- and long-term memory usage so that it knows when something is going to be deleted from its memory due to context limits.

This would ideally be part of a "quota"-like system, so that sub-agents can be managed by agents higher up in the chain whenever there is a quota/constraint violation (soft/hard), as per #3466
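
A hypothetical sketch of such a quota check, purely to illustrate the soft/hard distinction (thresholds, names, and the suggested responses are all assumptions, not anything specified in #3466):

```python
# Hypothetical quota check a parent agent could run against its sub-agents.
from dataclasses import dataclass

@dataclass
class MemoryQuota:
    soft_limit: int  # tokens: warn the sub-agent
    hard_limit: int  # tokens: parent agent intervenes

def check_quota(tokens_used: int, quota: MemoryQuota) -> str:
    """Classify a sub-agent's memory usage against its quota."""
    if tokens_used >= quota.hard_limit:
        return "hard violation: parent should prune or summarise the sub-agent's memory"
    if tokens_used >= quota.soft_limit:
        return "soft violation: warn the sub-agent to save important items to long-term memory"
    return "ok"

print(check_quota(3500, MemoryQuota(soft_limit=3000, hard_limit=4000)))
```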

Boostrix avatar May 04 '23 06:05 Boostrix

This issue was closed automatically because it has been stale for 10 days with no activity.

github-actions[bot] avatar Sep 17 '23 01:09 github-actions[bot]