Long-Term Memory Not Storing Data in Crew AI Agent
Description
I'm currently working on a project where I'm using Crew AI agents with chunking and context retention. As part of this process, I'm attempting to implement long-term memory for the agents using SQLite to store previous context and embedding information. However, I’ve noticed that the long-term memory is not storing any data.
Steps to Reproduce
- Integrate Crew AI agents with the logic for chunking SQL queries.
- Implement the long-term memory storage using SQLite to retain context and embeddings across chunks.
- Run the agent to process multiple SQL queries.
Expected behavior
The long-term memory should store and retrieve context and embedding data for use in subsequent query processing.
Screenshots/Code snippets
Operating System
Windows 11
Python Version
3.12
crewAI Version
0.41.1
crewAI Tools Version
NA
Virtual Environment
Venv
Evidence
Possible Solution
None
Additional context
None
Can you please provide more info/context on how you are setting this up in your crew, i.e. the code itself? You have only supplied a screenshot of the underlying crewai code for long-term memory.
agent1 = Agent(
    role="...",
    goal=...,
    allow_delegation=False,
    verbose=True,
    memory=True,
    llm=llm
)

agent2 = Agent(
    role=...,
    goal=...,
    allow_delegation=False,
    verbose=True,
    memory=True,
    tools=[web_tool],
    llm=llm
)
crew = Crew(
agents=[agent1, agent2],
tasks=[task1, task2],
memory=True,
process=Process.sequential,
verbose=True,
embedder={
"provider": "google",
"config":{
"model": 'models/embedding-001',
"task_type": "retrieval_document",
"title": "Embeddings for Embedchain"
}
}
)
This is my code where I am creating two agents and a crew. Beyond this, I have also enabled storing context in long-term memory using the approach described in the crewai documentation: https://docs.crewai.com/core-concepts/Memory/#how-memory-systems-empower-agents.
It is creating a file long_term_memory.db but the file is empty.
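For anyone hitting the same symptom, it is worth confirming directly that the SQLite file really contains no rows (rather than just being small). The sketch below makes no assumptions about crewAI's table names; it simply lists whatever tables exist in the file and counts their rows. Adjust the path to wherever the .db file was created on your machine.

```python
import sqlite3

# Inspect the SQLite file crewAI created; adjust the path to your setup.
conn = sqlite3.connect("long_term_memory.db")
cur = conn.cursor()

# List every table in the database.
cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
tables = [row[0] for row in cur.fetchall()]
print("tables:", tables)

# Count rows per table to confirm whether anything was actually saved.
for table in tables:
    cur.execute(f"SELECT COUNT(*) FROM {table}")  # names come from sqlite_master, not user input
    print(table, "->", cur.fetchone()[0], "rows")

conn.close()
```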
Any update on this? Encountering the same bug.
Also hitting this issue. Looking at the code, I can only see one place where `_long_term_memory.save(...)` is called (in the agent executor mixin), and there is no reference to it from anywhere else.
I'm also facing something similar. But I'm receiving the error: "Missing attributes for long term memory: 'str' object has no attribute 'quality'".
I'm also hitting this:
# Agent: Team Manager
## Thought: Thought: I need to engage with the Senior customer communicator to ask about the website URL in a professional manner. They will help us get this crucial information from the customer.
## Using tool: Delegate work to coworker
## Tool Input:
"{\"coworker\": \"Senior customer communicator\", \"task\": \"Ask the customer which website they want to generate more content for\", \"context\": \"We need to identify the website URL that the customer wants to focus on for content generation. Please ask them in a professional and friendly manner, and ensure to mention that you'll help format it properly (e.g., if they say 'mybusiness.com', you'll format it as 'https://mybusiness.com'). This is the first step in our content generation project and is crucial for moving forward.\"}"
## Tool Output:
Thank you for providing your website URL! I can confirm that we'll be focusing on https://mybusiness.com/ for your content generation project. I've verified that this URL is properly formatted with the 'https://' protocol, so we're all set to proceed with developing your content strategy. Is there anything specific about your golf challenge website that you'd like me to know as we move forward with content planning?
# Agent: Team Manager
## Final Answer:
https://mybusiness.com/
Missing attributes for long term memory: 'str' object has no attribute 'quality'
Seems related to this file, as it's the only place that string occurs: https://github.com/crewAIInc/crewAI/blame/5f46ff883632e4b3396c6dfbae98b9347e807f99/src/crewai/agents/agent_builder/base_agent_executor_mixin.py#L97
And the only change made there recently seems to be this PR: https://github.com/crewAIInc/crewAI/pull/1444
Understanding the Issue with Additional Information
Error Message
Missing attributes for long term memory: 'str' object has no attribute 'quality'
This error occurs in the _create_long_term_memory method when it tries to access evaluation.quality, but evaluation is a string instead of an instance of TaskEvaluation.
Relevant Code Sections
In `CrewAgentExecutorMixin._create_long_term_memory`:

def _create_long_term_memory(self, output) -> None:
    if (
        self.crew
        and self.crew.memory
        and self.crew._long_term_memory
        and self.crew._entity_memory
        and self.task
        and self.agent
    ):
        try:
            ltm_agent = TaskEvaluator(self.agent)
            evaluation = ltm_agent.evaluate(self.task, output.text)

            if isinstance(evaluation, ConverterError):
                return

            long_term_memory = LongTermMemoryItem(
                task=self.task.description,
                agent=self.agent.role,
                quality=evaluation.quality,
                datetime=str(time.time()),
                expected_output=self.task.expected_output,
                metadata={
                    "suggestions": evaluation.suggestions,
                    "quality": evaluation.quality,
                },
            )
            self.crew._long_term_memory.save(long_term_memory)

In `TaskEvaluator.evaluate`:

class TaskEvaluator:
    def __init__(self, original_agent):
        self.llm = original_agent.llm

    def evaluate(self, task, output) -> TaskEvaluation:
        evaluation_query = (
            f"Assess the quality of the task completed based on the description, expected output, and actual results.\n\n"
            f"Task Description:\n{task.description}\n\n"
            f"Expected Output:\n{task.expected_output}\n\n"
            f"Actual Output:\n{output}\n\n"
            "Please provide:\n"
            "- Bullet points suggestions to improve future similar tasks\n"
            "- A score from 0 to 10 evaluating on completion, quality, and overall performance"
            "- Entities extracted from the task output, if any, their type, description, and relationships"
        )

        instructions = "Convert all responses into valid JSON output."

        if not self.llm.supports_function_calling():
            model_schema = PydanticSchemaParser(model=TaskEvaluation).get_schema()
            instructions = f"{instructions}\n\nReturn only valid JSON with the following schema:\n```json\n{model_schema}\n```"

        converter = Converter(
            llm=self.llm,
            text=evaluation_query,
            model=TaskEvaluation,
            instructions=instructions,
        )

        return converter.to_pydantic()
Root Cause Analysis
- Converter Returns a String Instead of a TaskEvaluation Instance (a minimal illustration of the failure follows this list):
  - The `converter.to_pydantic()` method is expected to return an instance of `TaskEvaluation`.
  - However, under certain circumstances (e.g., when the LLM output cannot be parsed into the Pydantic model), it returns a string error message or the raw LLM output.
- LLM Output May Not Be in the Expected JSON Format:
  - The LLM might not be generating output in the expected JSON format, especially if the prompt or instructions are not clear enough.
  - If the LLM fails to produce valid JSON, the converter cannot parse it into the `TaskEvaluation` model.
- Lack of Error Handling in `TaskEvaluator.evaluate`:
  - The `evaluate` method does not handle the case where `converter.to_pydantic()` fails to return the expected type.
  - There is no check to verify the type of the returned value before using it.
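To see why this surfaces as the exact message in the logs, here is a minimal, self-contained illustration (not crewAI code): if the converter hands back a raw string where a `TaskEvaluation` is expected, the very first attribute access fails with the same `AttributeError`.

```python
# Stand-in for a converter that fell back to raw text instead of a parsed model.
evaluation = "Some raw LLM output that could not be parsed into JSON"

try:
    print(evaluation.quality)  # only works if evaluation is a TaskEvaluation-like object
except AttributeError as e:
    # Prints: Missing attributes for long term memory: 'str' object has no attribute 'quality'
    print(f"Missing attributes for long term memory: {e}")
```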
Steps to Resolve the Issue
1. Modify TaskEvaluator.evaluate to Handle Conversion Errors
Add error handling to check if converter.to_pydantic() returns an instance of TaskEvaluation. If not, handle the error gracefully.
Updated evaluate Method:
class TaskEvaluator:
def __init__(self, original_agent):
self.llm = original_agent.llm
def evaluate(self, task, output) -> TaskEvaluation:
evaluation_query = (
f"Assess the quality of the task completed based on the description, expected output, and actual results.\n\n"
f"Task Description:\n{task.description}\n\n"
f"Expected Output:\n{task.expected_output}\n\n"
f"Actual Output:\n{output}\n\n"
"Please provide:\n"
"- Bullet points suggestions to improve future similar tasks\n"
"- A score from 0 to 10 evaluating on completion, quality, and overall performance\n"
"- Entities extracted from the task output, if any, their type, description, and relationships"
)
instructions = "Convert all responses into valid JSON output."
if not self.llm.supports_function_calling():
model_schema = PydanticSchemaParser(model=TaskEvaluation).get_schema()
instructions = f"{instructions}\n\nReturn only valid JSON with the following schema:\n```json\n{model_schema}\n```"
converter = Converter(
llm=self.llm,
text=evaluation_query,
model=TaskEvaluation,
instructions=instructions,
)
# Try to convert to Pydantic and handle any errors
try:
result = converter.to_pydantic()
if not isinstance(result, TaskEvaluation):
raise ValueError("Failed to parse LLM output into TaskEvaluation.")
return result
except Exception as e:
print(f"Converter failed to produce valid TaskEvaluation: {e}")
# Return a default TaskEvaluation object
return TaskEvaluation(
suggestions=["No suggestions available due to evaluation error."],
quality=0.0,
entities=[]
)
Explanation:
- Error Handling: Wrap the conversion in a try-except block to catch any exceptions.
- Type Checking: Verify that `result` is an instance of `TaskEvaluation`; if not, raise an exception.
- Fallback: If the conversion fails, return a default `TaskEvaluation` object with safe default values.
2. Update Your CrewAgentExecutorMixin._create_long_term_memory Method
Ensure that you check the type of evaluation before accessing its attributes.
Updated Method:
def _create_long_term_memory(self, output) -> None:
if (
self.crew
and self.crew.memory
and self.crew._long_term_memory
and self.crew._entity_memory
and self.task
and self.agent
):
try:
ltm_agent = TaskEvaluator(self.agent)
evaluation = ltm_agent.evaluate(self.task, output.text)
if not isinstance(evaluation, TaskEvaluation):
print(f"Invalid evaluation result: {evaluation}")
return
long_term_memory = LongTermMemoryItem(
task=self.task.description,
agent=self.agent.role,
quality=evaluation.quality,
datetime=str(time.time()),
expected_output=self.task.expected_output,
metadata={
"suggestions": evaluation.suggestions,
"quality": evaluation.quality,
},
)
self.crew._long_term_memory.save(long_term_memory)
for entity in evaluation.entities:
entity_memory = EntityMemoryItem(
name=entity.name,
type=entity.type,
description=entity.description,
relationships="\n".join(
f"- {r}" for r in entity.relationships
),
)
self.crew._entity_memory.save(entity_memory)
except AttributeError as e:
print(f"Missing attributes for long term memory: {e}")
pass
except Exception as e:
print(f"Failed to add to long term memory: {e}")
pass
Explanation:
- Type Check: Verify that `evaluation` is an instance of `TaskEvaluation` before accessing its attributes.
- Error Message: If the check fails, print a helpful message and exit the method gracefully.
3. Ensure the LLM Generates Valid JSON Output
Issues with LLM Response:
- The LLM may not be generating output in the expected JSON format.
- Especially if the LLM does not support function calling or is not adhering to the instructions.
Actions:
- Use an LLM That Supports Function Calling:
  - If possible, switch to an LLM that supports function calling. This can improve the reliability of the responses.
- Adjust the Prompt and Instructions:
  - Make the instructions more explicit to guide the LLM toward producing valid JSON.
  - Example adjusted instructions (a fuller, self-contained version follows this list):

        instructions = (
            "Please provide the response strictly in valid JSON format, adhering to the following schema:\n"
            f"{model_schema}\n\n"
            "Do not include any extra text or explanations outside the JSON."
        )

- Include the JSON Schema in the Prompt:
  - By providing the exact schema, the LLM has a better chance of producing the correct output.
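If you want to see what such schema-bearing instructions look like end to end, here is a small, self-contained sketch. It uses an illustrative Pydantic model rather than crewAI's real `TaskEvaluation` (whose exact fields may differ), and `model_json_schema()` assumes Pydantic v2; on Pydantic v1 you would use `.schema_json()` instead.

```python
import json
from typing import List
from pydantic import BaseModel

# Illustrative stand-in for crewAI's TaskEvaluation; field names are assumptions.
class ExampleEvaluation(BaseModel):
    suggestions: List[str]
    quality: float

# Embed the exact JSON schema in the instructions so the LLM knows the target shape.
model_schema = json.dumps(ExampleEvaluation.model_json_schema(), indent=2)
instructions = (
    "Please provide the response strictly in valid JSON format, adhering to the following schema:\n"
    f"{model_schema}\n\n"
    "Do not include any extra text or explanations outside the JSON."
)
print(instructions)
```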
4. Test the Converter Independently
Ensure that the Converter class works as expected with your LLM.
Actions:
- Run a Test Conversion (a standalone probe sketch follows this list):
  - Manually create a test scenario where you use the `Converter` to parse a known LLM response.
- Verify Error Handling in `Converter.to_pydantic()`:
  - Check whether `to_pydantic()` raises exceptions or returns strings when it fails.
  - Ensure that it consistently raises exceptions on failure so that you can handle them appropriately.
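One rough way to probe this in isolation is to reuse the constructor arguments shown in `TaskEvaluator.evaluate` above. The import paths below are assumptions (these classes have lived under `crewai.utilities` in recent releases and may differ in yours), and the LLM is just an example model.

```python
# Assumed import locations; adjust to your crewAI version if they differ.
from crewai.utilities.converter import Converter
from crewai.utilities.evaluators.task_evaluator import TaskEvaluation
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o", temperature=0)  # or whatever LLM your agents use

converter = Converter(
    llm=llm,
    text=(
        "Task Description:\nSay hello.\n\n"
        "Expected Output:\nA short greeting.\n\n"
        "Actual Output:\nHello!"
    ),
    model=TaskEvaluation,
    instructions="Convert all responses into valid JSON output.",
)

result = converter.to_pydantic()
# If this prints <class 'str'>, the converter is silently falling back to raw text,
# which is exactly what triggers the .quality AttributeError downstream.
print(type(result), result)
```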
5. Update CrewAI Package
It's possible that this issue has been identified and addressed in a newer version of CrewAI.
Actions:
- Upgrade CrewAI (a quick environment version check follows this list):

      pip install --upgrade crewai

- Check for Known Issues:
  - Review the CrewAI GitHub repository for any similar issues.
  - See if patches or updates have been made related to long-term memory or evaluation.
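Before and after upgrading, it can help to confirm which release is actually active inside the virtual environment, since it is easy to have a different version in the venv than the one installed globally:

```python
from importlib.metadata import version

# Prints the crewAI release installed in the currently active environment.
print(version("crewai"))
```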
6. Verify Agent's LLM Configuration
Ensure that the LLMs used by your agents are correctly configured and compatible.
Actions:
- Check LLM Compatibility:
  - Verify that `claude-3-5-sonnet-20241022` supports the needed features.
  - If not, consider switching to a more compatible model like `gpt-4` or `gpt-4o`.
- Adjust Temperature and Settings:
  - A higher temperature can lead to more random outputs. Since you need structured JSON, set `temperature=0` to encourage deterministic outputs. A minimal configuration sketch follows this list.
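As a concrete illustration of the temperature point, here is a minimal sketch of pinning the agent LLM to deterministic output. It assumes you are passing a LangChain chat model to the agent (the usual wiring for crewAI 0.41.x) and uses an OpenAI model as an example; for Anthropic models the equivalent would be `ChatAnthropic` from `langchain_anthropic`.

```python
from crewai import Agent
from langchain_openai import ChatOpenAI

# Deterministic, function-calling-capable model for more reliable JSON conversion.
llm = ChatOpenAI(
    model="gpt-4o",
    temperature=0,
)

agent1 = Agent(
    role="...",        # placeholders, as in the original snippet
    goal="...",
    backstory="...",
    llm=llm,
    memory=True,
    verbose=True,
    allow_delegation=False,
)
```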
7. Provide Default Evaluation Values
In case all else fails, have a fallback mechanism to provide default values.
Actions:
- In the `evaluate` method:
  - If conversion fails, return an instance of `TaskEvaluation` with default or placeholder values (see the fallback in the updated `evaluate` method under step 1).
Implementing the Solution
Here's how you can integrate the above steps into your codebase:
- Update `TaskEvaluator.evaluate` to handle errors gracefully.
- Ensure type checks before accessing attributes in `_create_long_term_memory`.
- Adjust LLM settings and prompts for better output.
- Test the changes thoroughly to confirm the issue is resolved.
Example of Adjusted Code Flow
Within CrewAgentExecutorMixin:
def _create_long_term_memory(self, output) -> None:
# Existing conditions...
try:
ltm_agent = TaskEvaluator(self.agent)
evaluation = ltm_agent.evaluate(self.task, output.text)
# Type check and handle invalid evaluation
if not isinstance(evaluation, TaskEvaluation):
print("Evaluation returned invalid data. Skipping long-term memory update.")
return
# Proceed to use evaluation safely
# ...
except Exception as e:
print(f"Failed to add to long term memory: {e}")
Conclusion
The error arises because the TaskEvaluator.evaluate method is returning a string when it fails to parse the LLM's output into the TaskEvaluation model. By adding proper error handling and type checking, you can ensure that your code gracefully handles such cases, preventing the AttributeError and maintaining the stability of your application.
Key Takeaways:
- Always validate and type-check external inputs, especially when they come from models or services that might fail or produce unexpected output.
- Provide clear and explicit instructions to LLMs to improve the likelihood of receiving the desired output format.
- Implement robust error handling to catch and manage exceptions gracefully, maintaining application stability.
Feel free to ask if you need further clarification or assistance with implementing these solutions!
This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
This issue was closed because it has been stalled for 5 days with no activity.