Persist Agent History Across Sessions in AutoGen Studio
What happened?
Issue Description:
In AutoGen Studio version 0.4, we’ve observed that agents start each run without access to prior session history. This results in agents behaving as if each session is brand new, with no memory of past interactions.
Use Case:
As a user of AutoGen Studio, I want to ensure that agents can retain context between sessions. In earlier versions, agent memory seemed to persist across sessions in a more integrated way, which allowed for more natural and continuous interactions.
Current Behavior:
- Each new run in AutoGen Studio discards prior history.
- Agents begin conversations from scratch, with no embedded memory or context.
Expected Behavior:
- Agents should be able to recall prior interactions, either through built-in memory persistence or by easily embedding history at the start of each session.
Questions:
- Is there an intended way to embed previous conversation history in each new session?
- Are there recommended patterns for persisting and restoring history in a seamless way?
I initially thought that RAG might work, but retrieval would surface only chunks of related information from session to session rather than the entire history, so the agent would still lack full context.
Would something like the pseudo-code below work, and if so, how could this be made mainstream? A one-turn session offers little value compared with continuing an ongoing conversation.
```python
from autogen_agentchat.agents import AssistantAgent
from autogen_core.memory import ListMemory, MemoryContent, MemoryMimeType
from autogen_ext.models.openai import OpenAIChatCompletionClient

# Initialize memory
user_memory = ListMemory()

# Add previous conversation content to memory (run inside an async context)
await user_memory.add(
    MemoryContent(content="Previous conversation content here", mime_type=MemoryMimeType.TEXT)
)

# Configure the assistant agent with memory
assistant_agent = AssistantAgent(
    name="assistant_agent",
    model_client=OpenAIChatCompletionClient(model="gpt-4o-2024-08-06"),
    memory=[user_memory],
)
```
Environment:
- AutoGen Studio version: 0.4.x
- Platform: Python / AutoGen SDK
- Deployment: Local and experimental workflow prototyping
Which packages was the bug in?
AutoGen Studio (autogenstudio)
AutoGen library version.
Studio 0.4.1
Other library version.
No response
Model used
No response
Model provider
None
Other model provider
No response
Python version
3.11
.NET version
None
Operating system
Ubuntu
Hi @MrEdwards007,
Thanks for the issue.
> Is there an intended way to embed previous conversation history in each new session? Are there recommended patterns for persisting and restoring history in a seamless way?
There are a few ways to do this.
- [Preferred approach] Load/save state: We can add explicit state persistence using the save/load state API for teams. For example, once a run in a session completes (or errors), we save the state, then load it prior to any future run.
- Memory: We can also introduce the concept of Memory in AGS. Here we would need an interface to show memory instances, and a protocol for how/what gets added to memory. For example, after a run, an LLM could summarize the run and add the summary to the memory store (this works well only if the underlying memory implementation retrieves the right chunks).
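The summarize-then-store step in the second bullet could look roughly like this. This is a hedged sketch: `summarize_run` is a hypothetical helper (plain truncation stands in for the LLM call here), not an existing AGS or AutoGen API.

```python
# Hypothetical helper: condense a finished run's transcript into one short
# note. In practice an LLM call would replace the truncation below; the note
# could then be stored via ListMemory.add(MemoryContent(...)).
def summarize_run(messages, max_chars=400):
    transcript = " | ".join(f"{m['source']}: {m['content']}" for m in messages)
    if len(transcript) <= max_chars:
        return transcript
    return transcript[:max_chars] + "..."

# Example run transcript as a list of message dicts
run = [
    {"source": "user", "content": "Write a 3-line poem about Lake Tanganyika"},
    {"source": "assistant_agent", "content": "Blue depths hold ancient light."},
]
print(summarize_run(run))
```

The quality of later retrieval then depends entirely on how good these stored notes are, which is why the caveat about the memory implementation matters.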
For the use case you mention above (simply the ability to continue a conversation), I think load/save state is the way to go.
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(model="gpt-4o-2024-08-06")

# Define a team.
assistant_agent = AssistantAgent(
    name="assistant_agent",
    system_message="You are a helpful assistant",
    model_client=model_client,
)
agent_team = RoundRobinGroupChat(
    [assistant_agent],
    termination_condition=MaxMessageTermination(max_messages=2),
)

# Run the team and stream messages to the console.
stream = agent_team.run_stream(task="Write a beautiful 3-line poem about Lake Tanganyika")
# Use asyncio.run(...) when running in a script.
await Console(stream)

# Save the state of the agent team.
team_state = await agent_team.save_state()
```
Load the state back as follows:
```python
print(team_state)

# Load team state.
await agent_team.load_state(team_state)
stream = agent_team.run_stream(task="What was the last line of the poem you wrote?")
await Console(stream)
```
Are you looking to implement this in your own application, or are you more interested in working towards an implementation in autogenstudio?
Good day Victor,
I really appreciate your feedback. I plan on using both AutoGen and AutoGen Studio, but most of my time is spent experimenting with AutoGen Studio.
So the most direct answer is: an implementation in AutoGen Studio.
Got it.
A good first step would be to try out the load/save state API. At the same time, we can work together on an early implementation in AutoGen Studio (to enable experimentation). Are you open to working on a PR?
Thank you for the information about the load/save state and yes, I am open to working on a PR.
I am not able to implement save/load state in AutoGen Studio. Can you provide a guide for it?