llama_index
Adds Unit and Integration tests for MongoDBAtlasVectorSearch
Description
This PR adds tests for the MongoDB vector store `MongoDBAtlasVectorSearch`. We have already begun to use these tests in our own continuous integration. This will allow us to maintain the package as the llama-index main branch changes.
New Package?
NO.
Did I fill in the `tool.llamahub` section in the `pyproject.toml` and provide a detailed README.md for my new integration or package?
YES
A small change to pyproject.toml for the mongodb package has been added to permit testing.
Version Bump?
Did I bump the version in the `pyproject.toml` file of the package I am updating? (Except for the `llama-index-core` package)
Yes. The micro version was bumped. No API changes; this PR only adds tests and one additional dev dependency.
Type of Change
Please delete options that are not relevant.
- [ ] Bug fix (non-breaking change which fixes an issue)
- [X] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] This change requires a documentation update
How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration.
- [X] Added new unit/integration tests
- [ ] Added new notebook (that tests end-to-end)
- [ ] I stared at the code and made sure it makes sense
Suggested Checklist:
- [X] I have performed a self-review of my own code
- [X] I have commented my code, particularly in hard-to-understand areas
- [X] I have made corresponding changes to the documentation
- [X] I have added tests that prove my fix is effective or that my feature works
- [X] New and existing unit tests pass locally with my changes
- [X] I ran `make format; make lint` to appease the lint gods
Hi @logan-markewich, is there anything further that you'd like on this?
@caseyclements I'd like to merge, but I think we still need to properly skip the tests that rely on API keys, since those keys are not present in CI/CD
@logan-markewich I've updated to skip tests. Would you please approve the workflows to check whether this fixes all remaining failures?
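A common way to skip such tests (a sketch; the exact mechanism used in this PR may differ, and the env var name `MONGODB_URI` is an assumption) is to guard the whole test module with `pytest.mark.skipif` on an environment variable:

```python
import os

import pytest

# Hypothetical guard: skip every test in this module when the Atlas
# connection string (assumed env var name) is not configured, e.g. in CI
# environments that have no secrets set up.
pytestmark = pytest.mark.skipif(
    os.environ.get("MONGODB_URI") is None,
    reason="Requires MONGODB_URI pointing at a MongoDB Atlas cluster",
)
```

Because the skip condition is evaluated from the environment, the same tests run fully in a CI that has the secret and are skipped cleanly everywhere else.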
This means that the tests stay in this repo, but the full Atlas integration tests are actually run in our CI. I'm happy with that. Installing a local Atlas deployment for each workflow might be a big burden on your CI. Out of curiosity, what do you do to test OpenAIEmbedding?
@caseyclements tbh the tests on openai are lacking. Ideally, the API response from the underlying openai client would be mocked.
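As a sketch of that mocking idea (the method name and embedding size here are illustrative, not llama-index's actual internals), the client call that would hit the network can be replaced with a stub returning fixed-size vectors:

```python
from typing import List
from unittest.mock import MagicMock

EMBED_DIM = 1536  # a common OpenAI embedding size; purely illustrative


def fake_embed(text: str) -> List[float]:
    # Deterministic stand-in for a network call to the OpenAI API.
    return [0.0] * EMBED_DIM


# Stub client exposing the same call surface the tests would exercise.
mock_client = MagicMock()
mock_client.get_text_embedding.side_effect = fake_embed

vector = mock_client.get_text_embedding("hello world")
assert len(vector) == EMBED_DIM
```

Tests written this way run offline and deterministically, which is exactly what CI without API keys needs.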
@logan-markewich What was the intention of commit 1f22c8a? The tests now fail because the sample text yields only a single node.
```python
from llama_index.core.schema import Document
from llama_index.core.ingestion.pipeline import run_transformations
from llama_index.core.node_parser import SentenceSplitter

# Document.example() now produces just one node after splitting:
len(run_transformations([Document.example()], [SentenceSplitter()])) == 1
```
@caseyclements pants is annoying as hell to configure to rely on external files like text files. Easier to just use something in memory.
@logan-markewich Suppose we create an example that builds a `VectorStoreIndex` with more than a single node. What arguments can we pass to split each line (`\n`) into a separate node?
@logan-markewich I've just updated the type hinting for backward compatibility with Python 3.8. Would you please run the workflows?
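For context on the Python 3.8 compatibility point: builtin generics like `list[str]` only work on 3.9+ and `str | None` unions on 3.10+, so annotations must come from `typing`. A minimal illustration (hypothetical function, not code from this PR):

```python
from typing import List, Optional


def first_n(items: List[str], n: Optional[int] = None) -> List[str]:
    # On Python 3.8, `list[str]` and `str | None` in annotations fail at
    # runtime, so typing.List and typing.Optional are used instead.
    return items if n is None else items[:n]


assert first_n(["a", "b", "c"], 2) == ["a", "b"]
```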
I took the time to fix the integration tests and inserted many docs by splitting on newlines. Should be good to go.
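The newline-splitting fix described above might look roughly like this (the sample text is a placeholder, not the PR's actual fixture):

```python
# Stand-in sample text; the PR uses its own example document.
SAMPLE = "First fact about Atlas.\nSecond fact.\nThird fact.\n"

# One document per non-empty line, so the resulting index contains
# several nodes instead of one.
doc_texts = [line.strip() for line in SAMPLE.split("\n") if line.strip()]
assert len(doc_texts) == 3
```

Each text would then be wrapped in its own `Document` before indexing, sidestepping the single-node problem without any splitter tuning.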