
Add Recency and Importance for Memory Retrieval

Open younghuman opened this issue 2 years ago • 1 comment

Duplicates

  • [X] I have searched the existing issues

Summary 💡

From this paper: https://arxiv.org/abs/2304.03442, it makes sense to consider a memory's importance and recency when retrieving it, not only its semantic relevance as implemented today.

If we treat AutoGPT as a functional, human-like agent, this makes sense: very old memories and trivial memories should be discounted at retrieval time.

The formula in the paper is a heuristic: Retrieval_score = recency * importance * relevance.
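For illustration, here is a minimal Python sketch of that heuristic. The names (MemoryRecord, retrieval_score) are made up for this example, not existing AutoGPT code; the 0.995-per-hour recency decay follows the paper, which min-max normalizes the three terms and sums them rather than multiplying, but the idea is the same.

```python
import math
import time
from dataclasses import dataclass, field


@dataclass
class MemoryRecord:
    embedding: list[float]
    importance: float  # e.g. an LLM-assigned rating, scaled to [0, 1]
    last_accessed: float = field(default_factory=time.time)


def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def retrieval_score(query_embedding: list[float], mem: MemoryRecord,
                    decay: float = 0.995) -> float:
    # Recency: exponential decay per hour since the memory was last accessed.
    hours = (time.time() - mem.last_accessed) / 3600.0
    recency = decay ** hours
    # Relevance: plain cosine similarity between query and memory embeddings.
    relevance = cosine_similarity(query_embedding, mem.embedding)
    # Product form as quoted above; the paper's weighted sum of normalized
    # terms is easy to swap in here.
    return recency * mem.importance * relevance
```

At retrieval time, every candidate memory would be scored against the query embedding and the top-k kept, instead of ranking by relevance alone.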

If we add this to the Roadmap, I can help with the implementation.

Examples 🌈

No response

Motivation 🔦

No response

younghuman avatar Apr 22 '23 08:04 younghuman

The formula in the paper is a heuristic: Retrieval_score = recency * importance * relevance. If we add this to the Roadmap, I can help with the implementation.

Indeed, that's interesting.

I was thinking about using a subset of this idea for an MRU/LRU list of commands, i.e. specifically in the context of executing commands: maintain a history of previously executed commands, differentiate between those that worked and those that didn't, and use that to come up with a list of tailored, relevant command candidates.
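Something like this (purely illustrative; CommandHistory and its methods are hypothetical, not part of the AutoGPT codebase):

```python
from collections import OrderedDict


class CommandHistory:
    """MRU buffer of executed commands, remembering whether each one worked."""

    def __init__(self, max_size: int = 50):
        self._entries: "OrderedDict[str, bool]" = OrderedDict()
        self.max_size = max_size

    def record(self, command: str, succeeded: bool) -> None:
        # Re-inserting moves the command to the most-recently-used position.
        self._entries.pop(command, None)
        self._entries[command] = succeeded
        if len(self._entries) > self.max_size:
            self._entries.popitem(last=False)  # evict the least recently used

    def candidates(self) -> list[str]:
        # Recently used commands that previously worked, newest first.
        return [cmd for cmd, ok in reversed(self._entries.items()) if ok]
```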

With ideas like #3686 (which could potentially add a ton of commands), it seems even more important to rethink commands and how the system should present options to the LLM, with a focus on progressing with its objectives.

This command buffer could be extended by also recording a contextual history for each command. That way, a much more specific list of command candidates could be provided depending on the context, with the option to retrieve/customize a command that was previously executed: https://github.com/Significant-Gravitas/Auto-GPT/issues/2987#issuecomment-1531131136
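Extending the sketch above, the buffer could be keyed on a coarse context label (again hypothetical, just to make the idea concrete):

```python
from collections import defaultdict


class ContextualCommandHistory:
    """Per-context command history, so candidates match the current task."""

    def __init__(self):
        self._by_context: "defaultdict[str, list[tuple[str, bool]]]" = defaultdict(list)

    def record(self, context: str, command: str, succeeded: bool) -> None:
        self._by_context[context].append((command, succeeded))

    def candidates(self, context: str, limit: int = 5) -> list[str]:
        # Most recent successful, distinct commands seen in this context.
        seen: set[str] = set()
        out: list[str] = []
        for cmd, ok in reversed(self._by_context[context]):
            if ok and cmd not in seen:
                seen.add(cmd)
                out.append(cmd)
                if len(out) >= limit:
                    break
        return out
```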

Maybe that would be a good starting point (testbed) to tinker with the idea. What do you think?

Boostrix avatar May 03 '23 14:05 Boostrix

This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.

github-actions[bot] avatar Sep 06 '23 21:09 github-actions[bot]

This issue was closed automatically because it has been stale for 10 days with no activity.

github-actions[bot] avatar Sep 17 '23 01:09 github-actions[bot]
