Unable to provide llm_chain (instead of llm) to initialize_agent() while initializing the agent
System Info
- LangChain version: 0.0.158
- Python version: 3.11
- macOS Ventura 13.2.1
Who can help?
No response
Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
Reproduction
I am using an agent to integrate with the "serpapi" tool, but I also need to partially initialize a prompt to tell OpenAI how to use the tool. For example, I want to inject the current_date into the prompt before OpenAI starts interacting with SerpAPI. To do this, I am trying to pass an `llm_chain` instead of an `llm` instance, but the `initialize_agent` function currently only accepts an `llm` instance, not an `llm_chain`.
```python
import datetime

from langchain import LLMChain, OpenAI, PromptTemplate
from langchain.agents import AgentType, initialize_agent, load_tools


def run_search_chain_serpapi(user_input, current_date=None):
    if not current_date:
        current_date = f"{datetime.datetime.now():%Y-%m-%d}"
    llm = OpenAI(temperature=0)
    prompt = PromptTemplate(
        input_variables=["input", "current_date"],
        template=_DEFAULT_SEARCH_PROMPT,
    )
    partial_prompt = prompt.partial(current_date=current_date)
    llm_chain = LLMChain(llm=llm, prompt=partial_prompt)
    tools = load_tools([setup_serpi_tool()])
    agent = initialize_agent(
        tools, llm_chain, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
    )
    # Now let's test it out!
    result = agent(user_input)
    return result
```
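For context, `prompt.partial` simply pre-fills some of a template's variables and returns a new prompt that only needs the remaining ones. A stdlib-only stand-in illustrating the mechanics (the `TinyPrompt` class is made up for illustration; it is not LangChain code):

```python
class TinyPrompt:
    """Toy stand-in for PromptTemplate, to illustrate what partial() does."""

    def __init__(self, template, preset=None):
        self.template = template
        self.preset = dict(preset or {})

    def partial(self, **kwargs):
        # Return a new prompt with some variables pre-filled.
        return TinyPrompt(self.template, {**self.preset, **kwargs})

    def format(self, **kwargs):
        # Combine pre-filled and freshly supplied variables.
        return self.template.format(**self.preset, **kwargs)


p = TinyPrompt("Today is {current_date}. Search for: {input}")
p2 = p.partial(current_date="2023-05-09")
print(p2.format(input="latest news"))
# Today is 2023-05-09. Search for: latest news
```

The partially applied prompt can then be handed to a chain that only ever supplies `input`, which is exactly what the function above is trying to do.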
I get the following error while running this function:

```
ValidationError: 1 validation error for LLMChain
llm
  Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, generate_prompt (type=type_error)
```
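The error comes from pydantic validating the `llm` field: `initialize_agent` forwards its second argument as the `llm` of a new `LLMChain`, and an `LLMChain` is not a `BaseLanguageModel`. A rough stdlib-only illustration of the same failure mode (the class names mirror LangChain's, but this is a toy, not the real implementation):

```python
import abc


class BaseLanguageModel(abc.ABC):
    """Toy stand-in: LangChain's base class declares these as abstract."""

    @abc.abstractmethod
    def generate_prompt(self, prompts): ...

    @abc.abstractmethod
    async def agenerate_prompt(self, prompts): ...


def make_chain(llm):
    # Pydantic-style check: the llm field must be a BaseLanguageModel.
    if not isinstance(llm, BaseLanguageModel):
        raise TypeError(
            "1 validation error for LLMChain: llm is not a BaseLanguageModel"
        )
    return {"llm": llm}


class FakeChain:
    """Stands in for an LLMChain: it is not a language model."""


try:
    make_chain(FakeChain())
except TypeError as e:
    print(e)
```

So regardless of how composable the rest of the library is, this particular field rejects anything that does not subclass the language-model base class.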
Expected behavior
Given that LangChain is composable, I believe the agent should also be able to accept an `llm_chain` instance, not just a plain `llm`.
How about this?
```python
tool_names = [tool.name for tool in tools]
agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names)
agent_executor = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, verbose=verbose, max_iterations=max_iterations
)
```
That seems to have eliminated the initialization issue, but then the OutputParser is failing. I wonder if I need to set up an output parser explicitly for the `llm_chain`. Perhaps the `llm_chain` output is not compatible with what the agent is expecting?
```
File ~/src/.env11/lib/python3.11/site-packages/langchain/agents/mrkl/output_parser.py:26, in MRKLOutputParser.parse(self, text)
     24 match = re.search(regex, text, re.DOTALL)
     25 if not match:
---> 26     raise OutputParserException(f"Could not parse LLM output: `{text}`")
     27 action = match.group(1).strip()
     28 action_input = match.group(2)

OutputParserException: Could not parse LLM output: `Articles Found:
```
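The MRKL parser expects the model's reply to contain an `Action:` / `Action Input:` pair; if a custom prompt drops the ReAct format instructions, the model answers in free form (e.g. `Articles Found: ...`) and the regex never matches. A simplified version of the check (the regex below approximates, but is not copied from, `langchain/agents/mrkl/output_parser.py`):

```python
import re

# Approximation of the pattern the MRKL output parser looks for.
regex = r"Action\s*:[\s]*(.*?)[\s]*Action\s*Input\s*:[\s]*(.*)"

good = "Thought: I should search.\nAction: Search\nAction Input: latest news"
bad = "Articles Found: here are some articles..."

match = re.search(regex, good, re.DOTALL)
print(match.group(1), "|", match.group(2))
# Search | latest news

print(re.search(regex, bad, re.DOTALL))
# None
```

This is why swapping in a custom prompt without the format instructions (or using a model whose replies drift from the format, as with ChatOpenAI below) surfaces as `OutputParserException` rather than an error in your code.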
Hi @gdevanla, did you figure out how to parse the output? I am also facing this issue while using ChatOpenAI in an llm_chain with the SQL database agent.
```
Thought:Could not parse LLM output: `I will now construct a query to find the top performing sources for this month where the project_id is UM8Fm4BG1W. I will limit the query to 20 results and return the data in JSON format
```
It works fine when I use OpenAI instead of ChatOpenAI, but OpenAI does not support the GPT chat models.
@RamlahAziz I found a workaround for the time being. I took the original prompt template and modified it the way I wanted; then, after constructing the agent, I swapped in the modified prompt. I know this is a bit of a hack, but it works for the time being. Here is an example:
```python
llm = OpenAI(
    temperature=0,
    max_tokens=2000,
)
tools = load_tools([], llm=llm)
tools.append(NewsTool)
memory = ConversationBufferMemory(
    memory_key="chat_history", return_messages=True
)
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    memory=memory,
)

# THIS IS A HACK: overwrite the prompt after the agent is built
agent.agent.llm_chain.prompt.template = (
    "My custom template here. It should contain the following "
    "placeholders: {'input', 'chat_history', 'agent_scratchpad'}"
)
```
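If you go this route, it is easy to drop a placeholder the agent needs and get a `KeyError` at format time. A small stdlib-only guard you could run before assigning the template (the helper name is made up; `string.Formatter` is the standard way to list a format string's fields):

```python
import string


def template_fields(template):
    """Return the set of {placeholder} names used in a format template."""
    return {
        field
        for _, field, _, _ in string.Formatter().parse(template)
        if field is not None
    }


# Placeholders the conversational agent's prompt is expected to provide.
REQUIRED = {"input", "chat_history", "agent_scratchpad"}

custom = "History: {chat_history}\nQuestion: {input}\n{agent_scratchpad}"
missing = REQUIRED - template_fields(custom)
assert not missing, f"template is missing placeholders: {missing}"
```

Running this check once at startup turns a confusing runtime failure into an immediate, descriptive assertion.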
Thanks for this @gdevanla. I actually ended up making my own agent with GPT-4. There are also some suggestions on thread #5876, but they are for a SQL agent, not a SQL chain.
Hi, @gdevanla! I'm Dosu, and I'm here to help the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.
From what I understand, the issue is that the `initialize_agent` function only accepts an instance of `llm` and not an `llm_chain`. User ronsamuel84629 suggested a workaround: initializing the agent directly with `llm_chain` and `allowed_tools`. However, you then encountered an issue with the `OutputParser` failing. Another user, RamlahAziz, faced a similar output-parsing issue. You found a workaround by updating the prompt after constructing the agent, while RamlahAziz ended up creating their own agent with GPT-4.
Before we close this issue, we wanted to check if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on this issue. Otherwise, feel free to close the issue yourself or it will be automatically closed in 7 days.
Thank you for your understanding and contribution to the LangChain project!