feat: Adding support for Ollama local API embeddings
**Please describe the purpose of this pull request.** See title.

**How to test** Run `poetry run pytest -s tests/test_endpoints.py`.

**Have you tested this PR?** Yes, but it does not work yet.

**Related issues or PRs** Addresses #1369.

**Is your PR over 500 lines of code?** No.

**Additional context**
Current issue: `Unable to connect to http://localhost:11434` when running the indicated test.

This issue occurred both when using LlamaIndex's interface for creating an Ollama `EmbeddingModel` object and when creating a vanilla `EmbeddingModel` object without LlamaIndex. Do I need to have something running at http://localhost:11434 first?
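For reference, Ollama does need a server process listening on that port before any request will succeed (that is exactly what a connection-refused error indicates). Below is a minimal sketch for checking that the server is reachable, assuming the daemon was started with `ollama serve` and the model pulled beforehand; the `mxbai-embed-large` tag is an assumption, since Ollama's registry tag for this model may differ from the HuggingFace-style ID used in the config further down:

```python
import requests

OLLAMA_URL = "http://localhost:11434"

# Assumes the Ollama daemon is running locally (e.g. started with
# `ollama serve`) and the model was pulled first (e.g.
# `ollama pull mxbai-embed-large`). A ConnectionError here means
# nothing is listening on the port at all.
resp = requests.post(
    f"{OLLAMA_URL}/api/embeddings",
    json={"model": "mxbai-embed-large", "prompt": "hello world"},
    timeout=10,
)
resp.raise_for_status()
print(f"Server up; embedding dimension = {len(resp.json()['embedding'])}")
```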
**IMPORTANT**
The test fails when run on a file I created called `ollama.json`. It is contained (locally) within `configs/embedding_model_configs/`; however, this change did not get committed (and therefore pushed) for some reason.

Here are the contents of `ollama.json`:
```json
{
  "embedding_endpoint_type": "ollama",
  "embedding_endpoint": "http://localhost:11434",
  "embedding_model": "mixedbread-ai/mxbai-embed-large-v1",
  "embedding_dim": 512,
  "embedding_chunk_size": 200
}
```
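Two details of this config may be worth double-checking against what the server actually returns: Ollama typically expects its own registry tag (e.g. `mxbai-embed-large`) rather than a HuggingFace-style ID, and `mxbai-embed-large-v1` natively produces 1024-dimensional embeddings, so the `embedding_dim` of 512 presumes some downstream truncation. A small sketch, reusing the request pattern above (the config path and field names are taken from this file; nothing here is Letta's own API), to compare the configured dimension with the actual one:

```python
import json
import requests

# Load the same config file the failing test uses.
with open("configs/embedding_model_configs/ollama.json") as f:
    config = json.load(f)

# NOTE: Ollama may not recognize the HuggingFace-style ID stored in the
# config; its registry tag for this model is typically "mxbai-embed-large".
resp = requests.post(
    f"{config['embedding_endpoint']}/api/embeddings",
    json={"model": config["embedding_model"], "prompt": "dimension check"},
    timeout=10,
)
resp.raise_for_status()
actual_dim = len(resp.json()["embedding"])
print(f"configured dim: {config['embedding_dim']}, server dim: {actual_dim}")
```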
@sarahwooders looks like this is the same issue popping up again, with tests failing on contrib PRs.
Moved to and merged with #1433.