Something wrong when using Ollama + Qdrant: Vector dimension error: expected dim: 1536, got 768
🐛 Describe the bug
I use the Ollama embedding model and chat model and get a correct response.
But the response from Qdrant is: Vector dimension error: expected dim: 1536, got 768
Where can I configure this parameter?
Me too. After fixing bugs in mem0/llms/ollama.py and correctly creating m = Memory.from_config(config), I hit this bug when calling result = m.add().
Just restarting the Qdrant Docker container can fix this bug; see #712.
Rebooted, but still having the same issue!
Try editing mem0/embeddings/ollama.py at line 9:

```
def __init__(self, model="nomic-embed-text"):
    self.model = model
    self._ensure_model_exists()
    self.dims = 768  # this is mine
```
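For reference, you can check what dimension your local model actually returns before touching anything. A minimal sketch, assuming Ollama is serving on its default port 11434:

```
import requests

# Ask the local Ollama server for one embedding and check its length.
# Host/port are the Ollama defaults; adjust if yours differ.
resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": "hello"},
)
resp.raise_for_status()
print(len(resp.json()["embedding"]))  # nomic-embed-text -> 768
```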
```
config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "host": "localhost",
            "port": 6333,
        }
    },
}
```

Have you changed this config?
nope
Is there anything else that needs to be set up?
I modified a very small portion of the source code based on the error message, but the fact is that mem0 currently does not support running Ollama locally.
OK, then I won't take the time to make it work. I used OpenAI first, and the results were not bad!
Could you share your email or WeChat account? We can discuss it together.
Just create a new `collection_name`. By default, the `collection_name` is `mem0` (see `class MemoryItem`) and dims is 1536. Different config, different `collection_name` :)
The docs for the embedding model nomic-embed-text:latest show its dims are 768, so set the vector_store's `embedding_model_dims` to 768:
"vector_store": {
"provider": "qdrant",
"config": {
"collection_name": "test",
"host": "localhost",
"port": 6333,
"embedding_model_dims": 768, # Change this according to your local model's dimensions
},
},
If you have already created the collection (which you have by the time you hit this error), then either delete the collection in the Qdrant UI or change the `collection_name` so a new one is created during `Memory.add`.
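For anyone landing here later, a minimal end-to-end sketch of the fix, assuming mem0's `ollama` providers for both `llm` and `embedder` (the model names are just examples; use whatever you have pulled locally):

```
from mem0 import Memory

config = {
    "llm": {
        "provider": "ollama",
        "config": {"model": "llama3"},
    },
    "embedder": {
        "provider": "ollama",
        "config": {"model": "nomic-embed-text"},
    },
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "mem0_ollama_768",  # fresh name, so a 768-dim collection is created
            "host": "localhost",
            "port": 6333,
            "embedding_model_dims": 768,  # must match the embedder's output size
        },
    },
}

m = Memory.from_config(config)
result = m.add("I prefer tea over coffee.", user_id="alice")
```

The key point is that `embedding_model_dims` only takes effect when the collection is created, so pair it with a collection name that does not exist yet.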
Closing as fixed
Getting the same issue with the TypeScript SDK:
```
const config = {
  llm: {
    provider: "groq",
    config: {
      model: "llama-3.1-8b-instant",
      temperature: 0.1,
      max_tokens: 1000,
    },
  },
  embedder: {
    provider: "google",
    config: {
      apiKey: process.env.GEMINI_API_KEY!,
      model: "gemini-embedding-001",
      embeddingDims: 768,
      embedding_dims: 768,
      embedding_model_dims: 768,
    },
  },
  vectorStore: {
    provider: "qdrant",
    config: {
      collectionName: "abhi-new-test",
      embeddingModelDims: 768,
      host: "localhost",
      port: 6333,
    },
  },
  version: "v1.1",
};
```
Tried the whole day figuring this out!!
```
throw new fun.Error(err);
^

ApiError: Bad Request
    at Object.fun [as searchPoints] (file:///home/abhi/abhi-projects/genAI/node_modules/.pnpm/@[email protected]/node_modules/@qdrant/openapi-typescript-fetch/dist/esm/fetcher.js:169:23)
    at processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async QdrantClient.search (file:///home/abhi/abhi-projects/genAI/node_modules/.pnpm/@[email protected][email protected]/node_modules/@qdrant/js-client-rest/dist/esm/qdrant-client.js:167:26)
    at async Qdrant.search (file:///home/abhi/abhi-projects/genAI/node_modules/.pnpm/[email protected]_@[email protected][email protected]__@[email protected]_73eb170a988c6a467084e35c8e96c8d9/node_modules/mem0ai/src/oss/src/vector_stores/qdrant.ts:124:21)
    at async _Memory.addToVectorStore (file:///home/abhi/abhi-projects/genAI/node_modules/.pnpm/[email protected]_@[email protected][email protected]__@[email protected]_73eb170a988c6a467084e35c8e96c8d9/node_modules/mem0ai/src/oss/src/memory/index.ts:278:32)
    at async _Memory.add (file:///home/abhi/abhi-projects/genAI/node_modules/.pnpm/[email protected]_@[email protected][email protected]__@[email protected]_73eb170a988c6a467084e35c8e96c8d9/node_modules/mem0ai/src/oss/src/memory/index.ts:191:31)
    at async file:///home/abhi/abhi-projects/genAI/src/concepts/langraph/mem0.ts:59:1 {
  headers: Headers {},
  url: 'http://localhost:6333/collections/abhi-new-test/points/search',
  status: 400,
  statusText: 'Bad Request',
  data: {
    status: {
      error: 'Wrong input: Vector dimension error: expected dim: 1536, got 768'
    },
    time: 0.000357633
  }
}
```
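The server-side fix is the same regardless of SDK: `abhi-new-test` was already created with 1536-dim vectors, so either delete it or point the config at a fresh `collectionName`. A sketch with the Python Qdrant client (the underlying REST calls are identical from TypeScript):

```
from qdrant_client import QdrantClient

client = QdrantClient(host="localhost", port=6333)

# Inspect the dimension the collection was originally created with.
info = client.get_collection("abhi-new-test")
print(info.config.params.vectors.size)  # prints 1536 here, not 768

# Drop the stale collection; it will be recreated with the configured
# dims on the next add().
client.delete_collection("abhi-new-test")
```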