Adding Marqo Memory Implementation
Background
This PR introduces an implementation of Marqo for memory along with associated integration tests and documentation.
Marqo is an open-source vector store with an inbuilt inference engine. Using Marqo as memory for Auto-GPT couples the inference and storage of your embeddings, which removes the need to rely on external APIs such as OpenAI's Ada embeddings.
Changes
This PR adds a MarqoMemory class, integration tests, and README documentation. There are also some minor supporting changes to the Auto-GPT Config object, env template and __init__.py for the memory module.
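For context, Auto-GPT's memory backends at the time implemented a common provider interface (`add`, `get`, `get_relevant`, `get_stats`, `clear`). The sketch below shows how a Marqo-backed class maps onto that interface; it is illustrative only, not the PR's actual code, and it uses a naive in-process token-overlap score as a stand-in for the vector search that a real Marqo client would perform:

```python
from collections import Counter


class MarqoMemorySketch:
    """Illustrative stand-in for a Marqo-backed memory class.

    A real implementation would delegate each method to the Marqo
    Python client (index creation, add_documents, tensor search);
    here, token-overlap scoring stands in for vector search so the
    example is self-contained.
    """

    def __init__(self) -> None:
        self._docs: list[str] = []

    def add(self, data: str) -> str:
        """Store one piece of text in memory."""
        self._docs.append(data)
        return f"Inserting data into memory: {data}"

    def get(self, data: str) -> list[str]:
        """Return the single most relevant stored text."""
        return self.get_relevant(data, 1)

    def get_relevant(self, data: str, num_relevant: int = 5) -> list[str]:
        """Rank stored texts by shared-token count with the query."""
        query = Counter(data.lower().split())
        scored = sorted(
            self._docs,
            key=lambda d: -sum((query & Counter(d.lower().split())).values()),
        )
        return scored[:num_relevant]

    def get_stats(self) -> dict:
        """Report simple index statistics."""
        return {"num_docs": len(self._docs)}

    def clear(self) -> str:
        """Drop all stored texts."""
        self._docs = []
        return "Memory cleared"
```

Usage follows the same shape as the other backends: construct the memory once, `add` snippets as the agent runs, and call `get_relevant` when building the next prompt.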
Documentation
All MarqoMemory functions have type hints and docstrings, and the README contains information on getting started with Marqo.
Test Plan
Integration tests are provided and can be executed as follows.
Set up a venv and install Marqo's Python client:
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
pip install marqo
Run the Marqo docker image:
docker pull marqoai/marqo:latest
docker rm -f marqo
docker run --name marqo -it --privileged -p 8882:8882 --add-host host.docker.internal:host-gateway marqoai/marqo:latest
Run the integration tests:
python3 -m pytest tests/integration/marqo_memory_tests.py
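As a rough illustration of what such an integration test asserts, the sketch below checks round-trip behaviour (add a document, then find it by search). The names here are hypothetical and a fake index object replaces the live Marqo instance the real tests talk to, so this runs without Docker:

```python
class FakeIndex:
    """Hypothetical stand-in for a Marqo index: stores documents and
    answers searches with simple substring matching instead of
    tensor search."""

    def __init__(self):
        self.docs = []

    def add_documents(self, docs):
        self.docs.extend(docs)

    def search(self, query):
        # Marqo-style result shape: a dict with a "hits" list.
        return {
            "hits": [d for d in self.docs if query.lower() in d["text"].lower()]
        }


def test_add_and_search():
    index = FakeIndex()
    index.add_documents([{"text": "The sky is blue"}])
    hits = index.search("sky")["hits"]
    assert hits and hits[0]["text"] == "The sky is blue"
```

The real tests in `tests/integration/marqo_memory_tests.py` exercise the same pattern against the Marqo container started above.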
PR Quality Checklist
- [x] My pull request is atomic and focuses on a single change.
- [x] I have thoroughly tested my changes with multiple different prompts.
- [x] I have considered potential risks and mitigations for my changes.
- [x] I have documented my changes clearly and comprehensively.
- [x] I have not snuck in any "extra" small tweaks.
This pull request has conflicts with the base branch, please resolve those so we can evaluate the pull request.
Conflicts have been resolved! 🎉 A maintainer will review the pull request shortly.
This is a mass message from the AutoGPT core team. Our apologies for the ongoing delay in processing PRs. This is because we are re-architecting the AutoGPT core!
For more details (and for info on joining our Discord), please refer to: https://github.com/Significant-Gravitas/Auto-GPT/wiki/Architecting
You may be able to implement this as a plugin after the rearch. See https://github.com/Significant-Gravitas/Auto-GPT/discussions/3856#discussioncomment-5818746
This pull request has conflicts with the base branch, please resolve those so we can evaluate the pull request.
As we work towards a new vector memory integration, we are not merging any extra memory backends, to preserve our development agility. Once we have a basic implementation that works well, we can look at this again. Closing for now, as it is based on old code and it is unclear whether this should end up as a plugin or a core module.
Feel free to shoot a message on discord if you want to discuss further. :)