langchainjs
add example of memory to agent
Second this! I would like to either have a chain that can use tools, or an agent that has memory, or some way to combine the two but I can't find an example of how that would work. If there is existing documentation for the python package I'm happy to translate it to javascript.
EDIT: After looking around it seems like this functionality is not yet in the JavaScript package.
I am going to attempt to add this functionality and update the documentation. I would like to assign this issue to me (unless you are already working on it @hwchase17 ).
This is a real problem. Did you find a solution?
Did you find a solution? It's a real problem and there's no example anywhere.
Also having an issue with this. The agent works fine without memory, but when I add memory to the executor, the expected output is logged but never returned; I keep getting timeouts.
(Edit) For reference, this is how I added memory. There were instances where the correct output was returned, but it times out 9 times out of 10, and from the logs it looks like it loses context.
const executor = await initializeAgentExecutorWithOptions(tools, model, {
  agentType: 'chat-conversational-react-description',
  verbose: true,
});

const memory = new BufferMemory({
  chatHistory: new ChatMessageHistory(history),
  memoryKey: 'chat_history',
  returnMessages: true,
});

executor.memory = memory;

const res = await executor.call({
  input: userMessage,
});
@deonvisser44 that executor already sets up memory internally
If you'd like to use custom memory, you can pass the memory as part of the options when using that initializer:
const customMemory = new BufferMemory({
  chatHistory: new ChatMessageHistory(history),
  memoryKey: 'chat_history',
  returnMessages: true,
});

const executor = await initializeAgentExecutorWithOptions(tools, model, {
  agentType: 'chat-conversational-react-description',
  memory: customMemory,
  verbose: true,
});

const res = await executor.call({
  input: userMessage,
});
For tools that require memory, I found that adding the memory to the tool's constructor works. This is a similar pattern to how ConversationChain works. Expanding on the example above with a tool that uses memory:
export class ToolWithMemory extends Tool {
  // Tool subclasses need a name and description; these are placeholders.
  name = 'tool_with_memory';
  description = "A tool that can read from the agent's memory.";
  memory: BaseMemory;

  constructor(memory: BaseMemory) {
    super();
    this.memory = memory;
  }

  async _call(input: string) {
    // Do something with memory, e.g. read from this.memory.
    // someProcess is a placeholder for the tool's actual logic.
    return someProcess(input);
  }
}
const customMemory = new BufferMemory({
  chatHistory: new ChatMessageHistory(history),
  memoryKey: 'chat_history',
  returnMessages: true,
});

// Use the same memory as the agent, or set up a separate one specific to
// holding "context" outside of the "chat_history".
const tools = [new ToolWithMemory(customMemory)];

const executor = await initializeAgentExecutorWithOptions(tools, model, {
  agentType: 'chat-conversational-react-description',
  memory: customMemory,
  verbose: true,
});

const res = await executor.call({
  input: userMessage,
});
Hi, @hwchase17! I'm here to help the LangChain team manage their backlog and I wanted to let you know that we are marking this issue as stale.
From what I understand, the issue you raised is requesting an example of how to add memory to an agent. It seems that there has been some discussion on this topic, with users "dantheman0207" and "iloukou" expressing their interest in this functionality. "dantheman0207" even offered to add it and update the documentation. Additionally, user "deonvisser44" mentioned an issue with adding memory to the executor.
However, user "ChanChar" stepped in and provided a solution by explaining how to use custom memory with the executor and even provided some code examples. Based on this, it seems that the issue has been resolved.
Before we close this issue, we wanted to check with you if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself or it will be automatically closed in 7 days.
Thank you for your contribution to the LangChain project!
@ChanChar how can we add memory to agentType: "chat-zero-shot-react-description"?
Could you please give a sample to help us understand?
@jacoblee93 Could you please help @Boopalanoptisol with their question on how to add memory to the agentType "chat-zero-shot-react-description"? They are looking for a sample to understand. Thank you!
@jacoblee93 how can we add memory to a dynamic tool when using agentType: "zero-shot-react-description", so that if the user asks a question related to a previous question, the chatbot gives the correct response?
import { OpenAI } from "langchain/llms/openai";
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { DynamicTool } from "langchain/tools";

export const run = async () => {
  const model = new OpenAI({ temperature: 0 });
  const tools = [
    new DynamicTool({
      name: "FOO",
      description: "call this to get the value of foo. input should be an empty string.",
      func: async () => "baz",
    }),
    new DynamicTool({
      name: "BAR",
      description: "call this to get the value of bar. input should be an empty string.",
      func: async () => "baz1",
    }),
  ];

  const executor = await initializeAgentExecutorWithOptions(tools, model, {
    agentType: "zero-shot-react-description",
  });

  console.log("Loaded agent.");

  const input = `What is the value of foo?`;
  console.log(`Executing with input "${input}"...`);

  const result = await executor.call({ input });
  console.log(`Got output ${result.output}`);
};
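One dependency-free way to picture the requested behavior: give the tool access to a shared history so a later turn can refer back to an earlier one. This is only an illustrative sketch of the idea; `history` and `makeFooTool` are hypothetical names, not LangChain APIs:

```typescript
// Shared conversational context, visible to every tool built below.
const history: string[] = [];

// Builds a DynamicTool-shaped object whose func records what was asked.
function makeFooTool() {
  return {
    name: "FOO",
    description: "call this to get the value of foo. input should be an empty string.",
    func: async (_input: string): Promise<string> => {
      // Record the call so a follow-up question can refer back to it.
      history.push("user asked for the value of foo");
      return "baz";
    },
  };
}

async function main() {
  const fooTool = makeFooTool();
  const value = await fooTool.func("");
  console.log(`FOO returned ${value}`);
  // A follow-up turn can consult the recorded history for context:
  console.log(`context so far: ${history.join("; ")}`);
}

main();
```

In real LangChain code the equivalent move is what ChanChar showed earlier in the thread: keep one memory object and pass it both to the executor and to any tool that needs the conversation context.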
Is there an answer for this? I require something similar too.
🤖
Hi, @hwchase17,
I'm helping the langchainjs team manage their backlog and am marking this issue as stale. From what I understand, the issue was raised to request an example of how to add memory to an agent. There were subsequent discussions and questions from other users about adding memory to specific agent types, indicating that the issue may still be relevant. However, user @ChanChar has provided a solution by explaining how to use custom memory with the executor and providing code examples.
Could you please confirm if this issue is still relevant to the latest version of the langchainjs repository? If it is, please let the langchainjs team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.
Thank you for your understanding and contribution to langchainjs!