
OutputParserException: Could not parse LLM output

Open · abhinavkulkarni opened this issue 1 year ago · 0 comments

System Info

Linux knockdhu 5.4.0-139-generic #156-Ubuntu SMP Fri Jan 20 17:27:18 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

Who can help?

@hwchase17 @agola11 @vowelparrot

Information

  • [X] The official example notebooks/scripts
  • [ ] My own modified scripts

Related Components

  • [X] LLMs/Chat Models
  • [ ] Embedding Models
  • [ ] Prompts / Prompt Templates / Prompt Selectors
  • [ ] Output Parsers
  • [ ] Document Loaders
  • [ ] Vector Stores / Retrievers
  • [ ] Memory
  • [X] Agents / Agent Executors
  • [X] Tools / Toolkits
  • [X] Chains
  • [ ] Callbacks/Tracing
  • [ ] Async

Reproduction

import os
import torch
from dotenv import load_dotenv

from langchain import HuggingFacePipeline, ConversationChain
from langchain import PromptTemplate, LLMChain
from langchain.llms import OpenAI
from langchain.tools import DuckDuckGoSearchRun
from langchain.agents import load_tools, initialize_agent, AgentType
from langchain.tools import BaseTool, StructuredTool, Tool, tool


load_dotenv()


# Load LLM
model_id = "stabilityai/stablelm-tuned-alpha-3b"
llm = HuggingFacePipeline.from_model_id(
    model_id=model_id,
    task="text-generation",
    model_kwargs={
        "temperature": 0,
        "max_length": 512,
        "torch_dtype": torch.float16,
        "load_in_8bit": True,
        "device_map": "auto",
    },
)


# Load tools and create an agent
tools = load_tools(["llm-math"], llm=llm)
tools += [DuckDuckGoSearchRun()]
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)


# Following works
template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What is electroencephalography? "

print(llm_chain.run(question))


# Following throws an error
agent.run("What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power?")

I get the following output:

Setting `pad_token_id` to `eos_token_id`:0 for open-end generation.


> Entering new AgentExecutor chain...
---------------------------------------------------------------------------
OutputParserException                     Traceback (most recent call last)
Cell In[4], line 1
----> 1 agent.run("What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power?")

File /opt/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/chains/base.py:238, in Chain.run(self, callbacks, *args, **kwargs)
    236     if len(args) != 1:
    237         raise ValueError("`run` supports only one positional argument.")
--> 238     return self(args[0], callbacks=callbacks)[self.output_keys[0]]
    240 if kwargs and not args:
    241     return self(kwargs, callbacks=callbacks)[self.output_keys[0]]

File /opt/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/chains/base.py:142, in Chain.__call__(self, inputs, return_only_outputs, callbacks)
    140 except (KeyboardInterrupt, Exception) as e:
    141     run_manager.on_chain_error(e)
--> 142     raise e
    143 run_manager.on_chain_end(outputs)
    144 return self.prep_outputs(inputs, outputs, return_only_outputs)

File /opt/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/chains/base.py:136, in Chain.__call__(self, inputs, return_only_outputs, callbacks)
    130 run_manager = callback_manager.on_chain_start(
    131     {"name": self.__class__.__name__},
    132     inputs,
    133 )
    134 try:
    135     outputs = (
--> 136         self._call(inputs, run_manager=run_manager)
    137         if new_arg_supported
    138         else self._call(inputs)
    139     )
    140 except (KeyboardInterrupt, Exception) as e:
    141     run_manager.on_chain_error(e)

File /opt/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/agents/agent.py:905, in AgentExecutor._call(self, inputs, run_manager)
    903 # We now enter the agent loop (until it returns something).
    904 while self._should_continue(iterations, time_elapsed):
--> 905     next_step_output = self._take_next_step(
    906         name_to_tool_map,
    907         color_mapping,
    908         inputs,
    909         intermediate_steps,
    910         run_manager=run_manager,
    911     )
    912     if isinstance(next_step_output, AgentFinish):
    913         return self._return(
    914             next_step_output, intermediate_steps, run_manager=run_manager
    915         )

File /opt/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/agents/agent.py:749, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
    747 except Exception as e:
    748     if not self.handle_parsing_errors:
--> 749         raise e
    750     text = str(e).split("`")[1]
    751     observation = "Invalid or incomplete response"

File /opt/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/agents/agent.py:742, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
    736 """Take a single step in the thought-action-observation loop.
    737 
    738 Override this to take control of how the agent makes and acts on choices.
    739 """
    740 try:
    741     # Call the LLM to see what to do.
--> 742     output = self.agent.plan(
    743         intermediate_steps,
    744         callbacks=run_manager.get_child() if run_manager else None,
    745         **inputs,
    746     )
    747 except Exception as e:
    748     if not self.handle_parsing_errors:

File /opt/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/agents/agent.py:426, in Agent.plan(self, intermediate_steps, callbacks, **kwargs)
    424 full_inputs = self.get_full_inputs(intermediate_steps, **kwargs)
    425 full_output = self.llm_chain.predict(callbacks=callbacks, **full_inputs)
--> 426 return self.output_parser.parse(full_output)

File /opt/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/agents/mrkl/output_parser.py:26, in MRKLOutputParser.parse(self, text)
     24 match = re.search(regex, text, re.DOTALL)
     25 if not match:
---> 26     raise OutputParserException(f"Could not parse LLM output: `{text}`")
     27 action = match.group(1).strip()
     28 action_input = match.group(2)

OutputParserException: Could not parse LLM output: ` I know the high temperature in SF yesterday in Fahrenheit
Action: I now know the high temperature in SF yesterday in Fahrenheit`
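
For context on why this fails: the traceback ends in the MRKL output parser, which requires the LLM output to contain both an `Action:` line and an `Action Input:` line. The StableLM output above has an `Action:` line but no `Action Input:` line, so the regex search returns no match and the exception is raised. The pattern below is my approximation of the one in `langchain/agents/mrkl/output_parser.py` at the time (treat the exact regex as an assumption), but it reproduces the failure mode with plain `re`:

```python
import re

# Approximation of the MRKL parser's pattern: it needs an "Action:" line
# followed by an "Action Input:" line somewhere in the LLM output.
regex = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"

# The StableLM output from the traceback: there is no "Action Input:" line.
bad_output = (
    " I know the high temperature in SF yesterday in Fahrenheit\n"
    "Action: I now know the high temperature in SF yesterday in Fahrenheit"
)

# A well-formed ReAct step of the shape the parser accepts
# (tool name and query here are illustrative).
good_output = (
    "I should look this up.\n"
    "Action: duckduckgo_search\n"
    "Action Input: high temperature in SF yesterday"
)

print(re.search(regex, bad_output, re.DOTALL))  # None -> OutputParserException
print(re.search(regex, good_output, re.DOTALL).group(1).strip())  # duckduckgo_search
```

So the root cause is not the agent machinery itself but the smaller model failing to emit the `Action:` / `Action Input:` format the ReAct prompt asks for.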

Expected behavior

If I use an OpenAI LLM instead, the agent produces the expected output.

Please let me know how to solve this issue, as I want to experiment with open-source LLMs.

abhinavkulkarni · May 06 '23 06:05