AutoGPT
Support the GPTCache memory backend
Background
Over the past two days, I've been studying AutoGPT, and I find the idea very innovative and exciting. I maintain a project called GPTCache, which I think would make a suitable memory module for this project. As I understand it, the current memory module relies primarily on the ten most recent message texts: after an embedding operation, a vector search is performed to retrieve context, which gives the OpenAI ChatCompletion interface context from previous messages and the information needed for the next task. By integrating GPTCache, the process of obtaining context can be made even more customizable. GPTCache offers the following capabilities:
- The embedding module in GPTCache supports multiple embedding implementations.
- The store module in GPTCache provides a useful extension point for vector storage, allowing better data management, such as eviction and separate storage of scalar and vector data.
- By setting a similarity threshold, the evaluation module in GPTCache can return more relevant context. Setting the threshold to 0 matches the behaviour of the current memory backends.
Fully utilizing GPTCache would require additional development time, but its capabilities could significantly enhance the functionality and performance of the current memory module, so I believe it's worth the effort.
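To make the threshold idea concrete, here is a minimal, self-contained sketch of the retrieval behaviour described above. This is **not** GPTCache's actual API: the `embed` function is a toy hashed bag-of-words stand-in for a real embedding implementation, and `ThresholdMemory` is a hypothetical name used only for illustration.

```python
import math
from dataclasses import dataclass, field

def embed(text: str, dim: int = 64) -> list[float]:
    # Toy embedding: hashed bag-of-words. A real backend would use one
    # of GPTCache's embedding implementations instead.
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

@dataclass
class ThresholdMemory:
    # similarity_threshold=0 keeps every top-k hit, matching the
    # behaviour of the existing memory backends.
    similarity_threshold: float = 0.0
    _texts: list = field(default_factory=list)
    _vectors: list = field(default_factory=list)

    def add(self, text: str) -> None:
        self._texts.append(text)
        self._vectors.append(embed(text))

    def get_relevant(self, query: str, k: int = 5) -> list[str]:
        qv = embed(query)
        scored = sorted(
            ((cosine(qv, v), t) for v, t in zip(self._vectors, self._texts)),
            key=lambda pair: pair[0],
            reverse=True,
        )
        # Keep only top-k hits that clear the similarity threshold.
        return [t for s, t in scored[:k] if s >= self.similarity_threshold]
```

With a nonzero threshold, unrelated stored texts are filtered out of the context instead of being returned just because they are among the nearest neighbours.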
Changes
Support the GPTCache memory backend
Documentation
Test Plan
Same as TestLocalCache; only the cache-creation step changes:

```python
# TestLocalCache
self.cache = LocalCache(cfg)

# TestGPTCache
self.cache = GPTCacheMemory(cfg)
```
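The test case above could be fleshed out along the following lines. The import path `autogpt.memory.gptcache` and the `cfg` placeholder are assumptions for illustration; the real test would mirror TestLocalCache's setup, and the case is skipped when the backend is unavailable.

```python
import unittest

try:
    # Hypothetical import path; mirror wherever GPTCacheMemory actually lives.
    from autogpt.memory.gptcache import GPTCacheMemory
    HAS_GPTCACHE = True
except ImportError:
    HAS_GPTCACHE = False

@unittest.skipUnless(HAS_GPTCACHE, "GPTCache memory backend not available")
class TestGPTCache(unittest.TestCase):
    def setUp(self):
        cfg = ...  # same config object that TestLocalCache builds
        self.cache = GPTCacheMemory(cfg)

    def test_add_and_get(self):
        # Stored text should come back as relevant context for itself.
        self.cache.add("sample text")
        relevant = self.cache.get_relevant("sample text", 1)
        self.assertIn("sample text", relevant)
```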
PR Quality Checklist
- [x] My pull request is atomic and focuses on a single change.
- [x] I have thoroughly tested my changes with multiple different prompts.
- [x] I have considered potential risks and mitigations for my changes.
- [x] I have documented my changes clearly and comprehensively.
- [x] I have not snuck in any "extra" small tweaks or changes
@richbeales @p-i-
This pull request has conflicts with the base branch, please resolve those so we can evaluate the pull request.
Conflicts have been resolved! 🎉 A maintainer will review the pull request shortly.
@Torantulino help me check it, thank you 🤗
The latest updates on your projects. Learn more about Vercel for Git ↗︎
| Name | Status | Preview | Comments | Updated (UTC) |
|---|---|---|---|---|
| docs | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | Apr 28, 2023 6:11am |
This is a mass message from the AutoGPT core team. Our apologies for the ongoing delay in processing PRs. This is because we are re-architecting the AutoGPT core!
For more details (and for info on joining our Discord), please refer to: https://github.com/Significant-Gravitas/Auto-GPT/wiki/Architecting
@SimFG I wouldn't get too discouraged - the Re-Arch is coming along pretty fast. We are revamping memory with the re-arch, so once that is done, you should be able to continue work
@anonhostpi I will continue to pay attention. The reason for closing this PR is that it may be ineffective after the Re-Arch. Of course, there may be some delay on my side once the refactoring is completed, and I am very much looking forward to hearing that news.