Issue with input variables in conversational agents with memory
I'm trying to create a chatbot that needs an agent and memory. I'm having trouble getting ConversationBufferWindowMemory, ConversationalAgent, and ConversationChain to work together. A minimal version of the code is as follows:
from langchain.agents import AgentExecutor, ConversationalAgent
from langchain.chains import ConversationChain
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferWindowMemory

memory = ConversationBufferWindowMemory(
    k=3, buffer=prev_history, memory_key="chat_history")
prompt = ConversationalAgent.create_prompt(
    tools,
    prefix="You are a chatbot answering a customer's questions.{context}",
    suffix="""
Current conversation:
{chat_history}
Customer: {input}
Ai:""",
    input_variables=["input", "chat_history", "context"],
)
llm_chain = ConversationChain(
    llm=OpenAI(temperature=0.7),
    prompt=prompt,
    memory=memory,
)
agent = ConversationalAgent(llm_chain=llm_chain, tools=tools, verbose=True)
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
response = agent_executor.run(input=user_message, context=context, chat_history=memory.buffer)
return response
When I run the code with an input, I get the following error: Got unexpected prompt input variables. The prompt expects ['input', 'chat_history', 'context'], but got ['chat_history'] as inputs from memory, and input as the normal input key. (type=value_error)
If I remove the memory arg from ConversationChain, it works without throwing errors, but obviously without memory. Looking through the source code, the issue appears to be a mismatch between the input_variables in the Prompt and the memory_key and input_key in the Memory. It doesn't seem like desired behavior, but I haven't seen any examples that use an agent and memory for a conversation the way I'm trying to.
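For what it's worth, the check that produces this error can be sketched in plain Python. This is a simplification for illustration, not the actual LangChain validator: the prompt's input variables must equal the memory's keys plus exactly one normal input key, and any extra variable (here, context) trips the validation.

```python
# Simplified sketch of the validation that raises the error above
# (illustrative only, not the actual LangChain source).
prompt_vars = {"input", "chat_history", "context"}
memory_vars = {"chat_history"}        # comes from memory_key
expected = memory_vars | {"input"}    # memory keys + the single input key

# `context` is the extra variable the chain has no way to fill,
# hence "Got unexpected prompt input variables".
mismatch = prompt_vars != expected
extra_vars = prompt_vars - expected
```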
If anyone is trying to do the same thing, I found a workaround that fits our use case. We're trying to pass the chat history back and forth in HTTP requests to our chat service, which I believe this isn't designed to do, or that functionality hasn't been added yet.
We were able to get the desired functionality by removing ConversationBufferWindowMemory entirely and just keeping track of the memory / inserting it into the prompt suffix manually. It doesn't appear we were gaining much from the built-in memory anyway. The only things to watch out for are keeping the chat history to a reasonable size and adding our own summarization logic if we decide we need it.
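A minimal sketch of that workaround in plain Python (no LangChain at all; build_prompt and the K window size are illustrative names, not library API): the caller keeps the history itself, windows it to the last few exchanges, and splices it into the prompt suffix on each call.

```python
# Sketch of the manual-memory workaround: keep the history yourself,
# window it, and format it into the prompt suffix before each call.
K = 3  # keep only the last 3 exchanges, mirroring ConversationBufferWindowMemory(k=3)

def build_prompt(history, user_message):
    # history is a list of (speaker, text) tuples maintained by the caller,
    # e.g. round-tripped through HTTP requests as JSON
    window = history[-2 * K:]  # 2 lines per exchange: customer + AI
    lines = [f"{speaker}: {text}" for speaker, text in window]
    return (
        "You are a chatbot answering a customer's questions.\n"
        "Current conversation:\n"
        + "\n".join(lines)
        + f"\nCustomer: {user_message}\nAi:"
    )

history = [("Customer", "Hi"), ("Ai", "Hello! How can I help?")]
prompt = build_prompt(history, "What are your opening hours?")
```

The rendered string can then be sent straight to the LLM; summarization or truncation logic can be bolted onto the windowing step later if the history grows too large.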
I am having the same issue trying to use agents and memory. I wish I had searched the issues before spending hours on it; I assumed it was me, since I'm new to Python and AI in general. The error I'm getting is:
Got unexpected prompt input variables. The prompt expects ['input', 'history', 'agent_scratchpad'], but got ['history'] as inputs from memory, and input as the normal input key. (type=value_error)
Here's a link to a Colab notebook with my code: https://colab.research.google.com/drive/14AVckBac_z4cF272mjrPT8qqKpGb8aE5?usp=sharing. Also, I pprinted the prompt variable; it looks like this:
ChatPromptTemplate(
    input_variables=["input", "history", "agent_scratchpad"],
    output_parser=None,
    partial_variables={},
    messages=[
        SystemMessagePromptTemplate(
            prompt=PromptTemplate(
                input_variables=[],
                output_parser=None,
                partial_variables={},
                template="Answer the following questions as best you can. You have access to the following tools:\n\n> Search: useful for when you need to answer questions about current events\n> Wolfram Alpha: a computational knowledge engine that provides factual answers to a wide range of questions and does math computations.\n\nTo use a tool, please use the following format:\n\n\nThought: Do I need to use a tool? Yes\nAction: the action to take, should be one of [Search, Wolfram Alpha]\nAction Input: the input to the action\nObservation: the result of the action\n\n\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:\n\n\nThought: Do I need to use a tool? No\nAI: [your response here]\n\n\nBegin!",
                template_format="f-string",
                validate_template=True,
            ),
            additional_kwargs={},
        ),
        HumanMessagePromptTemplate(
            prompt=PromptTemplate(
                input_variables=["agent_scratchpad", "input"],
                output_parser=None,
                partial_variables={},
                template="\n {input}\n This was your previous work \n (but I haven't seen any of it! I only see what \n you return as final answer):\n {agent_scratchpad}",
                template_format="f-string",
                validate_template=True,
            ),
            additional_kwargs={},
        ),
        MessagesPlaceholder(variable_name="history"),
    ],
)
I'm having the same issue as @bmyers427 when using the ChatAgent class:
# Imports for the langchain version I'm on (paths may differ in newer releases):
from langchain.agents import AgentExecutor, ChatAgent
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferWindowMemory

cm = CallbackManager([StreamingStdOutCallbackHandler()])
chat = ChatOpenAI(
    openai_api_key=OPENAI_API_KEY,
    streaming=True,
    temperature=0,
    max_tokens=2056,
    verbose=True,
    callback_manager=cm,
)
memory = ConversationBufferWindowMemory(k=1, return_messages=True)
agent = ChatAgent.from_llm_and_tools(
    llm=chat,
    tools=[
        # ...
    ],
    prefix=PREFIX,
    suffix=SUFFIX,
    format_instructions=FORMAT_INSTRUCTIONS,
    input_variables=input_variables,
    memory=memory,
)
ae = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=[
        # ...
    ],
    memory=memory,
    verbose=True,
)
Has anyone managed to figure this out?
Could you please provide the minimal code for that workaround (removing ConversationBufferWindowMemory and tracking the history manually)? I'm working on the same use case.
Likewise here; this is blocking our use case at the moment, and it would be great to get at least a workaround.
Small reproduction:
from langchain import LLMChain, PromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferWindowMemory

x = LLMChain(
    llm=ChatOpenAI(temperature=0.95),
    prompt=PromptTemplate(
        input_variables=["input_1", "input_2", "input_3", "history"],
        template="""
{input_1}
{input_2}
{history}
Client: {input_3}
AI:""",
    ),
    verbose=True,
    memory=ConversationBufferWindowMemory(ai_prefix="AI"),
)
What others propose as a fix is to handle the memory yourself and pass it in as a variable.
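A minimal sketch of that approach in plain Python (no Memory object attached to the chain; TEMPLATE and render_prompt are illustrative names, not library API): the caller formats the history and passes it in like any other prompt variable.

```python
# Sketch: treat chat history as just another template variable
# instead of wiring up a LangChain Memory object.
TEMPLATE = """{context}
{history}
Client: {question}
AI:"""

def render_prompt(context, history_lines, question):
    # history_lines is a list of already-formatted turns,
    # e.g. ["Client: hi", "AI: hello"], maintained by the caller
    return TEMPLATE.format(
        context=context,
        history="\n".join(history_lines),
        question=question,
    )

rendered = render_prompt(
    "You are a support bot.",
    ["Client: hi", "AI: hello"],
    "thanks, bye",
)
```

Because the chain itself holds no memory, the prompt validation never fires; the rendered string (or the history variable) can go through an LLMChain with a memory-free prompt.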
I'm also encountering the same problem.
As per @hwchase17's comment in this thread, ConversationChain does not support input variables other than the input key.
Hi, @bmyers427! I'm Dosu, and I'm here to help the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.
From what I understand, there is currently a mismatch between the input_variables in the prompt and the memory_key and input_key in the memory. While removing the memory argument allows the code to work without errors, it also removes the memory functionality. Some users have found workarounds by manually keeping track of the memory and inserting it into the prompt suffix. This issue has not been resolved yet.
Could you please let us know if this issue is still relevant to the latest version of the LangChain repository? If it is, please comment on this issue to let the LangChain team know. Otherwise, feel free to close the issue yourself or it will be automatically closed in 7 days.
Thank you for your understanding and contribution to the LangChain project! Let us know if you have any further questions or concerns.
Has anyone fixed this issue yet?
Yeah, just set return_messages=True on the memory. That solved the issue for me.