
Exception when Conversation Agent doesn't receive json output

Open alexiri opened this issue 1 year ago • 6 comments

I'm trying to create a conversation agent essentially defined like this:

from langchain.agents import initialize_agent, load_tools
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

tools = load_tools([])  # ["wikipedia"]

llm = ChatOpenAI(model_name=MODEL, verbose=True)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

agent = initialize_agent(tools, llm,
    agent="chat-conversational-react-description",
    max_iterations=3,
    early_stopping_method="generate",
    memory=memory,
    verbose=True)

The agent raises an exception after it tries to use an invalid tool.

Question: My name is James and I'm helping Will. He's an engineer.


> Entering new AgentExecutor chain...
{
    "action": "Final Answer",
    "action_input": "Hello James, nice to meet you! How can I assist you and Will today?"
}

> Finished chain.
Answer: Hello James, nice to meet you! How can I assist you and Will today?
Question: What do you know about Will?


> Entering new AgentExecutor chain...
{
    "action": "recommend_tool",
    "action_input": "I recommend searching for information on Will on LinkedIn, which is a professional networking site. It may have his work experience, education and other professional details."
}
Observation: recommend_tool is not a valid tool, try another one.
Thought:Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/langchain/agents/conversational_chat/base.py", line 106, in _extract_tool_and_input
    response = self.output_parser.parse(llm_output)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/agents/conversational_chat/base.py", line 51, in parse
    response = json.loads(cleaned_output)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/src/app/main.py", line 93, in <module>
    ask(question)
  File "/usr/src/app/main.py", line 76, in ask
    result = agent.run(question)
             ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 213, in run
    return self(args[0])[self.output_keys[0]]
           ^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
              ^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 632, in _call
    next_step_output = self._take_next_step(
                       ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 548, in _take_next_step
    output = self.agent.plan(intermediate_steps, **inputs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 281, in plan
    action = self._get_next_action(full_inputs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 243, in _get_next_action
    parsed_output = self._extract_tool_and_input(full_output)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/agents/conversational_chat/base.py", line 109, in _extract_tool_and_input
    raise ValueError(f"Could not parse LLM output: {llm_output}")
ValueError: Could not parse LLM output: My apologies, allow me to clarify my previous response:

{
    "action": "recommend_tool",
    "action_input": "I recommend using a professional social network which can provide informative details on Will's professional background and accomplishments."
} 

You can try searching for information on platforms such as LinkedIn or XING to start.

Perhaps this is because there are no tools defined? This may be somewhat related to #2241, which is also a parsing error.

My real goal here was to test the agent's memory, but if I defined wikipedia as a tool, the agent would try to search Wikipedia for Will and respond with facts about some random Will. How can I get this to work?

alexiri avatar Apr 01 '23 18:04 alexiri

I got this error as well. A lot of the difficulty stems from needing to parse the output, and LLMs don't always return it in the format we need or want. I have been relying on conversational-react-description instead of conversational-chat for this reason.
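For reference, a minimal sketch of that swap, reusing the llm and tools from the snippet at the top of the issue (the memory is rebuilt without return_messages=True, since the string-prompt agent works with a plain text buffer; handle_parsing_errors only exists in newer LangChain releases):

from langchain.agents import initialize_agent
from langchain.memory import ConversationBufferMemory

# Plain text memory for the string-prompt ReAct agent.
memory = ConversationBufferMemory(memory_key="chat_history")

agent = initialize_agent(
    tools,
    llm,
    agent="conversational-react-description",  # replies in plain text, not JSON
    memory=memory,
    max_iterations=3,
    early_stopping_method="generate",
    handle_parsing_errors=True,  # newer LangChain only; drop this line on older releases
    verbose=True,
)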

exiao avatar Apr 13 '23 21:04 exiao

@exiao Even using conversational-react-description or chat-conversational-react-description doesn't solve the JSONDecodeError issue.

KeshavSingh29 avatar May 11 '23 01:05 KeshavSingh29

I feel like this has started happening way more often since I started providing a custom system prompt.

edit: using conversational-react-description in our case

mcantrell avatar May 25 '23 11:05 mcantrell

I am also running into this, going through the langchain quickstart guide.

jamesshapiro avatar May 27 '23 09:05 jamesshapiro

Note: This is a pretty hacky solution, but I noticed that the error was prefixing the desired JSON with some non-JSON text. Editing /opt/homebrew/lib/python3.11/site-packages/langchain/output_parsers/json.py as follows:

def parse_json_markdown(json_string: str) -> dict:
    # Drop anything the model emitted before the first ``` fence, then
    # re-attach the fence so the rest of the function sees a clean block.
    if '```' in json_string:
        json_string = '```' + json_string.split('```', 1)[1]

Solved the issue for me at least temporarily
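
For illustration (a made-up model output; the output in the traceback above has no fences at all, so this patch only helps when a ``` fence is present), the added lines drop the conversational preamble and leave a clean fenced block for the rest of the parser:

raw = 'Sure! Here you go:\n```json\n{"action": "Final Answer", "action_input": "hi"}\n```'
cleaned = '```' + raw.split('```', 1)[1]
# cleaned == '```json\n{"action": "Final Answer", "action_input": "hi"}\n```'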

jamesshapiro avatar May 27 '23 09:05 jamesshapiro

The OpenAI API sometimes returns an empty string ("") to the parser before trying again, so the parser attempts to parse it but fails on the empty string. To handle this, edit {copied-path}/langchain/agents/conversational_chat/output_parser.py as follows:

from __future__ import annotations

import json
from typing import Union

import regex  # third-party `regex` package, needed for the recursive (?R) pattern

from langchain.agents import AgentOutputParser
from langchain.agents.conversational_chat.prompt import FORMAT_INSTRUCTIONS
from langchain.schema import AgentAction, AgentFinish


class ConvoOutputParser(AgentOutputParser):
    def get_format_instructions(self) -> str:
        return FORMAT_INSTRUCTIONS

    def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
        cleaned_output = text.strip()

        if cleaned_output:
            # Strip any markdown code fences wrapped around the JSON payload.
            if "```json" in cleaned_output:
                _, cleaned_output = cleaned_output.split("```json", 1)
            if "```" in cleaned_output:
                cleaned_output, _ = cleaned_output.split("```", 1)
            if cleaned_output.startswith("```json"):
                cleaned_output = cleaned_output[len("```json"):]
            if cleaned_output.startswith("```"):
                cleaned_output = cleaned_output[len("```"):]
            if cleaned_output.endswith("```"):
                cleaned_output = cleaned_output[: -len("```")]

            # If the remaining text doesn't end with a closing brace, fall back
            # to extracting the first balanced JSON object from the raw output.
            if cleaned_output and not cleaned_output.endswith("\n}"):
                pattern = r"(\{(?:[^{}]|(?R))*\})"
                match = regex.search(pattern, text)
                if match is None:
                    return AgentFinish({"output": "Final Answer"}, "")
                cleaned_output = match.group(0)

            cleaned_output = cleaned_output.strip()

            if cleaned_output:
                response = json.loads(cleaned_output)
                action = response["action"]
                action_input = response.get("action_input", "")
                if action == "Final Answer":
                    return AgentFinish({"output": action_input}, text)
                return AgentAction(action, action_input, text)

        # The model returned an empty (or unusable) string: finish with an
        # empty answer instead of raising a JSONDecodeError.
        return AgentFinish({"output": ""}, "")
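
As a side note, the patched parser doesn't have to be edited into site-packages: in 0.0.x releases it can usually be passed in at construction time instead. A sketch, assuming the class above is saved as my_output_parser.py (a hypothetical file name) and the llm, tools, and memory from the original report:

from langchain.agents import initialize_agent

from my_output_parser import ConvoOutputParser  # the class defined above

agent = initialize_agent(
    tools,
    llm,
    agent="chat-conversational-react-description",
    memory=memory,
    # agent_kwargs is forwarded to ConversationalChatAgent.from_llm_and_tools,
    # which accepts a custom output_parser in 0.0.x releases.
    agent_kwargs={"output_parser": ConvoOutputParser()},
    verbose=True,
)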

Hope it's useful. ❤️

ghost avatar Jul 02 '23 15:07 ghost

Hi, @alexiri! I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

From what I understand, the issue you reported is related to the conversation agent failing to parse the output when an invalid tool is used. It seems that other users have also experienced this issue and have suggested using conversational-react-description instead of conversational-chat as a potential solution. Some users have even provided temporary solutions by editing certain files in the repository to address the parsing error.

Before we close this issue, we wanted to check with you if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on this issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.

Thank you for your contribution to the LangChain repository!

dosubot[bot] avatar Oct 09 '23 16:10 dosubot[bot]