OpenSearch: mapper_parsing_exception
I was running the example given in the documentation for OpenSearch as the vector provider:
https://docs.mem0.ai/components/vectordbs/dbs/opensearch
from mem0 import Memory

# config follows the OpenSearch example on the docs page linked above
m = Memory.from_config(config)

messages = [
    {"role": "user", "content": "I'm planning to watch a movie tonight. Any recommendations?"},
    {"role": "assistant", "content": "How about thriller movies? They can be quite engaging."},
    {"role": "user", "content": "I'm not a big fan of thriller movies but I love sci-fi movies."},
    {"role": "assistant", "content": "Got it! I'll avoid thriller recommendations and suggest sci-fi movies in the future."}
]

m.add(messages, user_id="alice", metadata={"category": "movies"})
I am getting the following errors:
Error inserting vector abdffcea-3097-41a5-aaa7-4d8c5716c1a5: RequestError(400, 'mapper_parsing_exception', "failed to parse field [vector_field] of type [knn_vector] in document with id 'pih5KJoBshOKdzGc0ipK'. Preview of field's value: 'null'")
Error processing memory action: {'id': '0', 'text': 'Loves sci-fi movies', 'event': 'ADD', 'old_memory': None}, Error: RequestError(400, 'mapper_parsing_exception', "failed to parse field [vector_field] of type [knn_vector] in document with id 'pih5KJoBshOKdzGc0ipK'. Preview of field's value: 'null'")
and
Error inserting vector 6f602db5-ea9f-4f1d-903e-d8b7fff9661a: RequestError(400, 'mapper_parsing_exception', "failed to parse field [vector_field] of type [knn_vector] in document with id 'Rix5KJoBoEQ83oZD0pvw'. Preview of field's value: 'null'")
Error processing memory action: {'id': '1', 'text': 'Does not like thriller movies', 'event': 'ADD', 'old_memory': None}, Error: RequestError(400, 'mapper_parsing_exception', "failed to parse field [vector_field] of type [knn_vector] in document with id 'Rix5KJoBoEQ83oZD0pvw'. Preview of field's value: 'null'")
It did create the mem0 and mem0Migration indexes, though.
While debugging the mem0 library, I noticed that the embedding model's vector dimensions and the index's vector dimensions did not match.
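A quick way to confirm the mismatch (a minimal sketch, assuming a local unauthenticated OpenSearch on port 9200 and the default mem0 index name) is to read the knn_vector dimension from the index mapping and compare it with the length of one embedding produced by your embedder:

from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])
mapping = client.indices.get_mapping(index="mem0")
# the knn_vector field created by mem0 is named vector_field (see the error above)
index_dims = mapping["mem0"]["mappings"]["properties"]["vector_field"]["dimension"]
print("index dimension:", index_dims)  # must equal len(embedding) from the embedder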
What would have helped is a better error message, or logging the entire call stack. Instead of

logger.error(f"Error inserting vector {id_}: {e}")

use

logger.error(f"Error inserting vector {id_}: {e}", exc_info=True, stack_info=True)
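For illustration, this is roughly where that change would sit (insert_vector is a hypothetical stand-in for the insert method in mem0's OpenSearch vector store; only the logger.error call is the suggested change):

import logging

logger = logging.getLogger(__name__)

def insert_vector(client, index, id_, body):  # hypothetical helper, not mem0's real method
    try:
        client.index(index=index, id=id_, body=body)
    except Exception as e:
        # exc_info/stack_info surface the full traceback, which points at the
        # dimension mismatch instead of only the opaque 400 from OpenSearch
        logger.error(f"Error inserting vector {id_}: {e}", exc_info=True, stack_info=True)
        raise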
@miriyald Can you please add the Hacktoberfest label? I want to work on this issue.
Thank you. I don't think I can add labels.
@miriyald the issue is quite deep-rooted in the system — the decoupling between embedders and vector stores can cause dimension mismatches. Currently, there’s no internal validation to catch this, as it’s handled implicitly behind the scenes. For now, I’ve updated the documentation to ensure both the embedder and vector store use the same dimensions, which should temporarily resolve the issue. However, a thorough refactor and stricter validation checks will be needed to prevent such issues in the future.
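For anyone hitting this in the meantime, the workaround looks roughly like this (a minimal sketch, not the exact config from the docs; key names such as embedding_model_dims and embedding_dims are assumptions based on mem0's config schema, and 1536 matches OpenAI's text-embedding-3-small):

config = {
    "embedder": {
        "provider": "openai",
        "config": {
            "model": "text-embedding-3-small",
            "embedding_dims": 1536,  # must match the vector store dimension below
        },
    },
    "vector_store": {
        "provider": "opensearch",
        "config": {
            "host": "localhost",
            "port": 9200,
            "collection_name": "mem0",
            "embedding_model_dims": 1536,  # must match the embedder dimension above
        },
    },
}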
It would be great if some validations could be put in place.
Additionally, do you have any plans to use OpenSearch batch operations (or the equivalent in other native vector DBs) to insert multiple facts? Something like the bulk helper sketched below.
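(Illustrative only, not how mem0 currently inserts; the document structure, including the payload field name, is an assumption.)

from opensearchpy.helpers import bulk

actions = [
    {
        "_index": "mem0",
        "_id": vec_id,
        "_source": {"vector_field": embedding, "payload": payload},
    }
    for vec_id, embedding, payload in records  # records: (id, vector, metadata) tuples
]
bulk(client, actions)  # one round trip instead of one request per vector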
Regarding validations, I agree that these things should be abstracted and handled internally, or that there should be validation checks; I'm just not sure why they don't exist in the first place.
Regarding batch operations, that's not my call. I just contribute to issues :/