Graph store with Anthropic LLM and Ollama embedder not working
🐛 Describe the bug
I have this config and basic code to add a memory:
from mem0 import Memory

config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "test",
            "host": "localhost",
            "port": 6333,
            "embedding_model_dims": 768,
        },
    },
    "llm": {
        "provider": "anthropic",
        "config": {
            "model": "claude-3-5-sonnet-20241022",
            "temperature": 0.1,
            "max_tokens": 8192,
        },
    },
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "neo4j://localhost:7687",
            "username": "neo4j",
            "password": "password",
            "embedding_model_dims": 768,
        },
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text:latest",
            "ollama_base_url": "http://localhost:11434",
        },
    },
    "version": "v1.1",
}

m = Memory.from_config(config)
res = m.add("I am working on improving my photography skills. Suggest some online courses.", user_id="john")
print(res)
The code above throws the following error on the m.add call:
anthropic.BadRequestError: Error code: 400 - {'type': 'error', 'error': {'type': 'invalid_request_error', 'message': 'tool_choice: Input should be a valid dictionary or object to extract fields from'}}
It works well without the graph_store key in config.
I've seen examples that use only graph_store in their config, but I'm having issues with that as well; this is the error I'm getting:
config = {
    # "vector_store": {
    #     "provider": "qdrant",
    #     "config": {
    #         "collection_name": "test",
    #         "host": "localhost",
    #         "port": 6333,
    #         "embedding_model_dims": 768,
    #     },
    # },
    "llm": {
        "provider": "anthropic",
        "config": {
            "model": "claude-3-5-sonnet-20241022",
            "temperature": 0.1,
            "max_tokens": 8192,
        },
    },
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "neo4j://localhost:7687",
            "username": "neo4j",
            "password": "password",
            "embedding_model_dims": 768,
        },
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text:latest",
            "ollama_base_url": "http://localhost:11434",
        },
    },
    "version": "v1.1",
}
ValueError: shapes (0,1536) and (768,) not aligned: 1536 (dim 1) != 768 (dim 0)
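The 1536 here looks like a default embedding dimension (OpenAI's embedding size) being applied when no vector_store block is present, while nomic-embed-text produces 768-dimensional vectors. A minimal workaround sketch, assuming that is the cause: keep the vector_store block from the first config so its embedding_model_dims matches the embedder.

# Workaround sketch (assumption: the mismatch comes from a 1536-dim default
# vector store); pin the store to the embedder's 768 dims instead.
config["vector_store"] = {
    "provider": "qdrant",
    "config": {
        "collection_name": "test",
        "host": "localhost",
        "port": 6333,
        "embedding_model_dims": 768,  # must match nomic-embed-text's output
    },
}

Note this only addresses the shape mismatch; with graph_store still present, it brings you back to the tool_choice error above.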
Versions
- Mem0 = 0.1.36
- Neo4j = 4.1.13
- Anthropic (PIP) = 0.42.0
I am also experiencing this issue.
It looks like, when a graph store is specified, mem0 is not constructing the tools and tool_choice parameters in the form the Anthropic API expects.
A reference to the mem0 code for constructing a request to Anthropic:
def generate_response(
    self,
    messages: List[Dict[str, str]],
    response_format=None,
    tools: Optional[List[Dict]] = None,
    tool_choice: Dict[str, str] = {"type": "auto"},
):
    """
    Generate a response based on the given messages using Anthropic.

    Args:
        messages (list): List of message dicts containing 'role' and 'content'.
        response_format (str or object, optional): Format of the response. Defaults to "text".
        tools (list, optional): List of tools that the model can call. Defaults to None.
        tool_choice (str, optional): Tool choice method. Defaults to "auto".

    Returns:
        str: The generated response.
    """
    # Separate system message from other messages
    system_message = ""
    filtered_messages = []
    for message in messages:
        if message["role"] == "system":
            system_message = message["content"]
        else:
            filtered_messages.append(message)

    params = {
        "model": self.config.model,
        "messages": filtered_messages,
        "system": system_message,
        "temperature": self.config.temperature,
        "max_tokens": self.config.max_tokens,
        "top_p": self.config.top_p,
    }
    if tools:  # TODO: Remove tools if no issues found with new memory addition logic
        params["tools"] = tools
        params["tool_choice"] = tool_choice

    response = self.client.messages.create(**params)
    return response.content[0].text
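Given the 400 error above ("tool_choice: Input should be a valid dictionary or object"), my guess is that some caller, presumably the graph-memory code path, passes tool_choice as the plain string "auto", which is then forwarded to the API unchanged. A defensive sketch of the if tools: block, assuming that is what happens:

if tools:  # sketch only, not the actual mem0 fix
    params["tools"] = tools
    # Assumption: callers sometimes pass tool_choice="auto" as a plain string;
    # the Anthropic API expects a dict such as {"type": "auto"}.
    if isinstance(tool_choice, str):
        tool_choice = {"type": tool_choice}
    params["tool_choice"] = tool_choice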
I am still reviewing the code, but it looks like the tool_choice default is inconsistent (the docstring documents tool_choice (str, optional) defaulting to "auto", while the signature uses a dict), and I am not sure the format of the tools provided aligns with what Anthropic expects.
What is defined in the mem0 code:
EXTRACT_ENTITIES_TOOL = {
    "type": "function",
    "function": {
        "name": "extract_entities",
        "description": "Extract entities and their types from the text.",
        "parameters": {
            "type": "object",
            "properties": {
                "entities": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "entity": {
                                "type": "string",
                                "description": "The name or identifier of the entity."
                            },
                            "entity_type": {
                                "type": "string",
                                "description": "The type or category of the entity."
                            }
                        },
                        "required": ["entity", "entity_type"],
                        "additionalProperties": False
                    },
                    "description": "An array of entities with their types."
                }
            },
            "required": ["entities"],
            "additionalProperties": False
        }
    }
}
What is defined in the Anthropic documentation (https://docs.anthropic.com/en/docs/build-with-claude/tool-use#example-simple-tool-definition):
{
    "name": "get_weather",
    "description": "Get the current weather in a given location",
    "input_schema": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA"
            },
            "unit": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"],
                "description": "The unit of temperature, either 'celsius' or 'fahrenheit'"
            }
        },
        "required": ["location"]
    }
}
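Comparing the two: mem0's definition uses the OpenAI-style envelope ("type": "function" with a nested "function" object and a "parameters" schema), while Anthropic wants a flat object with name, description, and input_schema. A sketch of the translation that appears to be missing; the helper name here is mine, not mem0's:

def to_anthropic_tool(openai_tool: dict) -> dict:
    # Unwrap the OpenAI-style {"type": "function", "function": {...}} envelope
    # and rename "parameters" to the "input_schema" key Anthropic documents.
    fn = openai_tool["function"]
    return {
        "name": fn["name"],
        "description": fn["description"],
        "input_schema": fn["parameters"],
    }

For example, to_anthropic_tool(EXTRACT_ENTITIES_TOOL) would produce an extract_entities definition in the documented Anthropic shape.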
llm:
  provider: "litellm"
  config:
    model: "claude-3-7-sonnet-latest"
    temperature: 0.6
You can circumvent the issue by going through the litellm abstraction layer, which is properly implemented. Hope it helps someone.
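For anyone using the Python dict config from earlier in the thread, the equivalent would look roughly like this (assuming litellm resolves the model name as in the YAML above):

config["llm"] = {
    "provider": "litellm",
    "config": {
        "model": "claude-3-7-sonnet-latest",
        "temperature": 0.6,
    },
}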
@acgonzales You're trying to add a string to the memory, but the expected format is a dict. Something like this:
{ "role": "user", "content": "Who is the Prime Minister of India?" }