
Something wrong when using Ollama + Qdrant: Vector dimension error: expected dim: 1536, got 768

AI-Beans opened this issue 1 year ago • 11 comments

🐛 Describe the bug

I use Ollama for both the embedding model and the chat model and get a correct response. But the response from Qdrant is: Vector dimension error: expected dim: 1536, got 768. Where can I configure this parameter?

AI-Beans avatar Jul 24 '24 09:07 AI-Beans

Me too. After fixing bugs in mem0/llms/ollama.py and correctly creating m = Memory.from_config(config), I hit this bug when calling result = m.add().

teatimekon avatar Jul 24 '24 10:07 teatimekon

🐛 Describe the bug

I use Ollama for both the embedding model and the chat model and get a correct response. But the response from Qdrant is: Vector dimension error: expected dim: 1536, got 768. Where can I configure this parameter?

Just restarting the Qdrant Docker container can fix this bug; see #712.

teatimekon avatar Jul 25 '24 07:07 teatimekon

Rebooted, but still having the same issue!

AI-Beans avatar Jul 26 '24 06:07 AI-Beans

Rebooted, but still having the same issue!

Try editing mem0/embeddings/ollama.py at line 9:

```python
def __init__(self, model="nomic-embed-text"):
    self.model = model
    self._ensure_model_exists()
    self.dims = 768  # this is mine
```

teatimekon avatar Jul 26 '24 06:07 teatimekon

```python
config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "host": "localhost",
            "port": 6333,
        }
    }
}
```

Have you changed this config?

AI-Beans avatar Jul 26 '24 06:07 AI-Beans
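For reference, a fuller version of that config for a local Ollama + Qdrant setup might look like the sketch below. The provider names and keys shown follow mem0's config style but should be checked against the docs for your version; the important part is that embedding_model_dims matches your embedding model's output size (768 for nomic-embed-text).

```python
# Hypothetical full config for a local Ollama + Qdrant setup. Key names
# follow mem0's config style; verify them against the mem0 docs for
# the version you are running.
config = {
    "llm": {
        "provider": "ollama",
        "config": {"model": "llama3"},
    },
    "embedder": {
        "provider": "ollama",
        "config": {"model": "nomic-embed-text"},
    },
    "vector_store": {
        "provider": "qdrant",
        "config": {
            # A fresh name avoids colliding with a stale 1536-dim collection.
            "collection_name": "mem0_ollama",
            "host": "localhost",
            "port": 6333,
            # Must match the embedder: nomic-embed-text outputs 768-dim vectors.
            "embedding_model_dims": 768,
        },
    },
}
```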

nope

teatimekon avatar Jul 26 '24 06:07 teatimekon

Is there anything else that needs to be set up?

AI-Beans avatar Jul 26 '24 06:07 AI-Beans

Is there anything else that needs to be set up?

I modified a very small portion of the source code based on the error message, but the fact is that mem0 currently does not support running Ollama locally.

teatimekon avatar Jul 26 '24 06:07 teatimekon

Is there anything else that needs to be set up?

I modified a very small portion of the source code based on the error message, but the fact is that mem0 currently does not support running Ollama locally.

OK, then I won't spend time making it work. I used OpenAI first, and the results were not bad!

Could you share your email or WeChat account? We could discuss it together.

AI-Beans avatar Jul 26 '24 07:07 AI-Beans

Just create a new 'collection_name'. By default the 'collection_name' is 'mem0' (see class MemoryItem) and dims is 1536. Different config, different 'collection_name' :)

yslion avatar Jul 29 '24 03:07 yslion
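To see why a new collection name works: Qdrant fixes the vector size when a collection is created, and every point inserted or searched afterwards must match that size, so a collection created for 1536-dim OpenAI embeddings rejects 768-dim nomic-embed-text vectors. A minimal sketch of that check (a hypothetical helper for illustration, not actual mem0 or Qdrant code):

```python
def check_vector_dim(expected_dim: int, vector: list[float]) -> None:
    """Mimic the server-side dimension check that produces this error."""
    if len(vector) != expected_dim:
        raise ValueError(
            f"Vector dimension error: expected dim: {expected_dim}, got {len(vector)}"
        )

# A 768-dim embedding against a collection created for 1536 dims fails:
try:
    check_vector_dim(1536, [0.0] * 768)
except ValueError as e:
    print(e)  # Vector dimension error: expected dim: 1536, got 768

# Against a collection created with size 768, the same vector is accepted:
check_vector_dim(768, [0.0] * 768)
```

Creating a fresh collection (new name, or after deleting the old one) lets mem0 recreate it with the size that matches your embedder.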

The docs for the embedding model nomic-embed-text:latest show that its dims are 768, so set the vector_store's embedding_model_dims to 768:

```python
"vector_store": {
    "provider": "qdrant",
    "config": {
        "collection_name": "test",
        "host": "localhost",
        "port": 6333,
        "embedding_model_dims": 768,  # change this to match your local model's dimensions
    },
},
```

If you have already created the collection (which you have by the time you see this error), then either delete the collection in the Qdrant UI or change the collection_name so a new one is created during Memory.add.

johnwlockwood avatar May 10 '25 19:05 johnwlockwood

Closing as fixed

parshvadaftari avatar Sep 13 '25 23:09 parshvadaftari

Getting the same issue with the TypeScript SDK:

```typescript
const config = {
  llm: {
    provider: "groq",
    config: {
      model: "llama-3.1-8b-instant",
      temperature: 0.1,
      max_tokens: 1000,
    },
  },
  embedder: {
    provider: "google",
    config: {
      apiKey: process.env.GEMINI_API_KEY!,
      model: "gemini-embedding-001",
      embeddingDims: 768,
      embedding_dims: 768,
      embedding_model_dims: 768,
    },
  },
  vectorStore: {
    provider: "qdrant",
    config: {
      collectionName: "abhi-new-test",
      embeddingModelDims: 768,
      host: "localhost",
      port: 6333,
    },
  },
  version: "v1.1",
};
```

Tried the whole day figuring this out!!


```
             throw new fun.Error(err);
                      ^
ApiError: Bad Request
    at Object.fun [as searchPoints] (file:///home/abhi/abhi-projects/genAI/node_modules/.pnpm/@[email protected]/node_modules/@qdrant/openapi-typescript-fetch/dist/esm/fetcher.js:169:23)
    at processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async QdrantClient.search (file:///home/abhi/abhi-projects/genAI/node_modules/.pnpm/@[email protected][email protected]/node_modules/@qdrant/js-client-rest/dist/esm/qdrant-client.js:167:26)
    at async Qdrant.search (file:///home/abhi/abhi-projects/genAI/node_modules/.pnpm/[email protected]_@[email protected][email protected]__@[email protected]_73eb170a988c6a467084e35c8e96c8d9/node_modules/mem0ai/src/oss/src/vector_stores/qdrant.ts:124:21)
    at async _Memory.addToVectorStore (file:///home/abhi/abhi-projects/genAI/node_modules/.pnpm/[email protected]_@[email protected][email protected]__@[email protected]_73eb170a988c6a467084e35c8e96c8d9/node_modules/mem0ai/src/oss/src/memory/index.ts:278:32)
    at async _Memory.add (file:///home/abhi/abhi-projects/genAI/node_modules/.pnpm/[email protected]_@[email protected][email protected]__@[email protected]_73eb170a988c6a467084e35c8e96c8d9/node_modules/mem0ai/src/oss/src/memory/index.ts:191:31)
    at async file:///home/abhi/abhi-projects/genAI/src/concepts/langraph/mem0.ts:59:1 {
  headers: Headers {},
  url: 'http://localhost:6333/collections/abhi-new-test/points/search',
  status: 400,
  statusText: 'Bad Request',
  data: {
    status: {
      error: 'Wrong input: Vector dimension error: expected dim: 1536, got 768'
    },
    time: 0.000357633
  }
}
```

nerdyabhi avatar Dec 10 '25 18:12 nerdyabhi