`initialize_agent` does not work with `return_intermediate_steps=True`
E.g. running
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.llms import OpenAI
llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True, return_intermediate_steps=True)
agent.run("What is 2 raised to the 0.43 power?")
gives the error
203 """Run the chain as text in, text out or multiple variables, text out."""
204 if len(self.output_keys) != 1:
--> 205 raise ValueError(
206 f"`run` not supported when there is not exactly "
207 f"one output key. Got {self.output_keys}."
208 )
210 if args and not kwargs:
211 if len(args) != 1:
ValueError: `run` not supported when there is not exactly one output key. Got ['output', 'intermediate_steps'].
Is this supposed to be called differently, or how else can the intermediate outputs ("Observations") be retrieved?
Call the agent directly on the input like this:
agent("What is 2 raised to the 0.43 power?")
This will return a dict with keys "input", "output", and "intermediate_steps".
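For illustration, the returned dict has roughly the following shape. The values below are placeholders, and the namedtuple stands in for LangChain's `AgentAction`; `intermediate_steps` is a list of `(action, observation)` tuples, so the "Observations" are the second element of each pair:

```python
from collections import namedtuple

# Stand-in for langchain's AgentAction, purely for this sketch.
AgentAction = namedtuple("AgentAction", ["tool", "tool_input", "log"])

# Hypothetical shape of the dict returned by calling the agent directly.
response = {
    "input": "What is 2 raised to the 0.43 power?",
    "output": "2 raised to the 0.43 power is approximately 1.347.",
    "intermediate_steps": [
        (AgentAction("Calculator", "2**0.43", "..."), "Answer: 1.3472..."),
    ],
}

# Extract the observations from each (action, observation) tuple.
observations = [obs for _, obs in response["intermediate_steps"]]
print(observations)
```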
I just tried running `agent("test")`, `agent(input='test')`, and `agent(dict(input='test'))`, and all of them raise errors.
agent("test") and agent(dict(input='test')) raise:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "~/Library/Caches/pypoetry/virtualenvs/langflow-zotWOIqD-py3.11/lib/python3.11/site-packages/langchain/chains/base.py", line 118, in __call__
return self.prep_outputs(inputs, outputs, return_only_outputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/Library/Caches/pypoetry/virtualenvs/langflow-zotWOIqD-py3.11/lib/python3.11/site-packages/langchain/chains/base.py", line 170, in prep_outputs
self.memory.save_context(inputs, outputs)
File "~/Library/Caches/pypoetry/virtualenvs/langflow-zotWOIqD-py3.11/lib/python3.11/site-packages/langchain/memory/chat_memory.py", line 28, in save_context
raise ValueError(f"One output key expected, got {outputs.keys()}")
ValueError: One output key expected, got dict_keys(['output', 'intermediate_steps'])
And agent(input='test') raises:
Traceback (most recent call last):
File "<string>", line 1, in <module>
TypeError: Chain.__call__() got an unexpected keyword argument 'input'
More evidence:
from langchain.agents import initialize_agent
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
llm = OpenAI(
    temperature=0,
    model_name="text-davinci-002",
    openai_api_key="sk-",
)
tools = []
agent = initialize_agent(
    tools,
    llm,
    agent="zero-shot-react-description",
    verbose=True,
    return_intermediate_steps=True,
    memory=ConversationBufferMemory(memory_key="chat_history"),
)
response = agent(
    {
        "input": "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"
    }
)
This raises the same error. The problem seems to be the memory.
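For context, the check that fires lives in the memory's `save_context`: with `return_intermediate_steps=True` the chain produces two outputs, and a memory without an explicit `output_key` cannot decide which one to store. A minimal pure-Python sketch of that logic (mirroring the error message in the traceback above, not LangChain's actual source):

```python
def pick_output_key(outputs: dict, output_key=None):
    # Without an explicit output_key, exactly one output is required —
    # otherwise the memory cannot tell which value to save.
    if output_key is None:
        if len(outputs) != 1:
            raise ValueError(f"One output key expected, got {outputs.keys()}")
        return next(iter(outputs))
    return output_key

# Two outputs and no output_key -> the ValueError from the traceback:
outputs = {"output": "1.3472...", "intermediate_steps": []}
try:
    pick_output_key(outputs)
except ValueError as e:
    print(e)

# Naming the key to save resolves the ambiguity:
print(pick_output_key(outputs, output_key="output"))
```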
Define the input and output keys in your memory when you initialize the agent like this:
agent = initialize_agent(
    tools,
    llm,
    agent="zero-shot-react-description",
    verbose=True,
    return_intermediate_steps=True,
    memory=ConversationBufferMemory(memory_key="chat_history", input_key='input', output_key="output"),
)
Thanks @wct432 ! Is it in the docs? I could not find the section mentioning that approach
I tested it too and it works. Thanks, @wct432!
No worries @ogabrielluiz @mzhadigerov, happy to help.
Regarding it being in the docs, I don't think so. It's been a while since I was looking into this, but it's something like the kwargs aren't passed from the memory to the agent unless they're set explicitly on initialization. Maybe @hwchase17 can comment if this is intended behavior or not.
I have the same issue with `create_pandas_dataframe_agent` (which unfortunately cannot be solved with your `initialize_agent` call, @wct432).
I tried replacing the `AgentExecutor.from_agent_and_tools` in `create_pandas_dataframe_agent` with `initialize_agent`, but I don't know how to pass the prompt into `initialize_agent`.
cc @hwchase17
I faced the same issue while using the chat-conversational-react-description agent. I tried overriding the ConversationBufferMemory as suggested in #3091
Solution that worked for me:
My LangChain setup uses `ConversationBufferMemory` with the memory key `chat_history`.
- Initializing the agent
`initialize_agent` returns an `AgentExecutor` object, so it's important to pass `return_intermediate_steps=True` to the executor.
agent = initialize_agent(
    agent='chat-conversational-react-description',
    tools=tools,
    llm=ChatOpenAI(temperature=0.0),
    memory=memory,
    return_intermediate_steps=True,  # Make sure you set it to True
    verbose=True
)
- Memory allocation
I found that setting `input_key` and `output_key` helps to get a dictionary output with `intermediate_steps`. Also make sure that `return_messages=True`.
memory = ConversationBufferMemory(memory_key="chat_history", input_key="input", output_key="output", return_messages=True)
- Executing the chain
Since we changed the `input_key` in the previous step, the chain is now invoked with the following format:
response = chain({"input":query})
print(response['intermediate_steps'])
Hope this helps!
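Once `intermediate_steps` comes back in the response dict, each entry can be unpacked into the action and its observation. A small self-contained sketch of rendering the trace (the namedtuple below is a stand-in for LangChain's `AgentAction`, which exposes `tool` and `tool_input`):

```python
from collections import namedtuple

# Stand-in for langchain's AgentAction, just for this sketch.
AgentAction = namedtuple("AgentAction", ["tool", "tool_input"])

def format_steps(intermediate_steps):
    """Render (action, observation) pairs as a readable trace."""
    lines = []
    for action, observation in intermediate_steps:
        lines.append(f"Action: {action.tool}({action.tool_input})")
        lines.append(f"Observation: {observation}")
    return "\n".join(lines)

steps = [(AgentAction("Calculator", "2**0.43"), "Answer: 1.3472")]
print(format_steps(steps))
```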
Applying the `chat-conversational-react-description` solution above throws:
File [c:\Users\user\.conda\envs\llm_env\lib\site-packages\langchain\chains\base.py:136](file:///C:/Users/user/.conda/envs/llm_env/lib/site-packages/langchain/chains/base.py:136), in Chain.__call__(self, inputs, return_only_outputs, callbacks)
130 run_manager = callback_manager.on_chain_start(
131 {"name": self.__class__.__name__},
132 inputs,
133 )
134 try:
135 outputs = (
--> 136 self._call(inputs, run_manager=run_manager)
137 if new_arg_supported
138 else self._call(inputs)
139 )
140 except (KeyboardInterrupt, Exception) as e:
141 run_manager.on_chain_error(e)
...
354 except StopIteration as err:
--> 355 raise JSONDecodeError("Expecting value", s, err.value) from None
356 return obj, end
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
It seems like the LLM is not producing output in the expected JSON format! Does anybody have any ideas about this behaviour? However, `conversational-react-description` seems to be working fine!
Hey @ogabrielluiz, you better change that API key ;)
It hasn't been active for a while now, but thanks!
Hello, could someone assist me with this one?
suffix="""
Here they are:
{chat_history}
Question: {input}
{agent_scratchpad}
"""
prompt = ZeroShotAgent.create_prompt(
    toolkit.get_tools(),
    prefix='Look for the information of emails according to the past results',
    suffix=suffix,
    input_variables=["input", "chat_history", "agent_scratchpad"],
)
memory = ConversationBufferMemory(memory_key="chat_history", input_key='input', output_key="output")
llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
agent_2 = ZeroShotAgent(llm_chain=llm_chain, tools=toolkit.get_tools(), verbose=True, return_intermediate_steps=True)
agent_chain = AgentExecutor.from_agent_and_tools(agent=agent_2, tools=toolkit.get_tools(), verbose=True, memory=memory, intermediate_steps=True)
It's giving me the input, chat_history and output but not the intermediate steps
The above solutions work when there is no output parsing exception. However, I'm not sure how to get the intermediate steps when an output parsing exception occurs, since that kind of exception doesn't seem fixable. I'm using the following wrapper to ignore the exception, but I still want the `intermediate_steps` when it occurs. Does anyone have any ideas?
def agent_with_error_handle(input):
    try:
        response = agent(input)
        return response
    except ValueError as e:
        error = str(e)
        if not error.startswith("Could not parse LLM output: "):
            raise e
        output = error.removeprefix("Could not parse LLM output: ")
        return output
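The wrapper above can be exercised without an LLM by passing a stub agent in explicitly; doing so also makes the limitation visible: when the parse exception fires, only the raw LLM string survives in the exception message, so any `intermediate_steps` gathered up to that point are lost (the stub agents below are illustrative):

```python
def agent_with_error_handle(agent, input):
    # Same logic as the wrapper above, with the agent passed in explicitly.
    try:
        return agent(input)
    except ValueError as e:
        error = str(e)
        if not error.startswith("Could not parse LLM output: "):
            raise e
        return error.removeprefix("Could not parse LLM output: ")

def good_agent(_query):
    # Stands in for a successful agent call.
    return {"output": "42", "intermediate_steps": []}

def bad_agent(_query):
    # Stands in for an agent whose LLM output could not be parsed.
    raise ValueError("Could not parse LLM output: some unparseable text")

print(agent_with_error_handle(good_agent, "q"))  # full dict, steps included
print(agent_with_error_handle(bad_agent, "q"))   # only the raw string remains
```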
Is there any way to make this work with `SQLDatabaseChain.from_llm()`? Setting `return_intermediate_steps=True` in the chain returns the error `ERROR:root:'run' not supported when there is not exactly one output key. Got ['result', 'intermediate_steps']`. The suggested workarounds in the thread still don't work :(
sql_db_chain = SQLDatabaseChain.from_llm(
    llm,
    db,
    prompt=few_shot_prompt,
    use_query_checker=False,
    verbose=True,
    return_intermediate_steps=True,
)
sql_tool = Tool(
    name='SQL tool',
    func=sql_db_chain.run,
    description="..."
)
tools = load_tools(
    ["llm-math"],
    llm=llm
)
tools.append(sql_tool)
conversational_agent = initialize_agent(
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    tools=tools,
    llm=llm,
    verbose=True,
    memory=ConversationBufferMemory(memory_key="chat_history", input_key='input', output_key="output", return_messages=True),
)
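One possible culprit in the snippet above, independent of the memory settings: `Tool(func=sql_db_chain.run)` routes through `Chain.run`, which is exactly the call that rejects chains with two output keys. A hedged workaround is to call the chain object directly and select the `result` key; the stub below stands in for `SQLDatabaseChain` (its return values are made up) so the idea is runnable:

```python
def fake_sql_chain(query):
    # Stub for a chain with two output keys, like SQLDatabaseChain
    # with return_intermediate_steps=True.
    return {
        "result": "There are 3 matching rows.",
        "intermediate_steps": ["SELECT COUNT(*) FROM ..."],
    }

# Instead of func=sql_db_chain.run, wrap the direct call and pick one key:
def sql_tool_func(query):
    return fake_sql_chain(query)["result"]

print(sql_tool_func("how many rows?"))
```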
Yeah, I still can't get any sort of memory to work with `initialize_agent`.
Hello @AlejandroGil, I faced the same problem and finally found a solution!
For your code, the fix may be:
sql_db_chain = SQLDatabaseChain.from_llm(
    llm,
    db,
    prompt=few_shot_prompt,
    use_query_checker=False,
    verbose=True,
    return_intermediate_steps=True,
    intermediate_steps=['Action Input', 'Observation'],  # add this line to define which intermediate_steps you want; default is ['result', 'intermediate_steps']
)
Hi, @msieb1,
I'm helping the LangChain team manage their backlog and am marking this issue as stale. The issue you reported involves a problem with the initialize_agent function not working with the return_intermediate_steps=True parameter, leading to a ValueError when attempting to retrieve intermediate outputs. Several users have provided potential workarounds and solutions, and there have been discussions about handling output parse exceptions and addressing problems with specific agent types like chat-conversational-react-description and SQLDatabaseChain.
Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.
Thank you for your understanding and cooperation.