
ValueError: Could not parse LLM output: on 'chat-conversational-react-description'

Open Wzy232303 opened this issue 2 years ago • 1 comment

Error with the AgentOutputParser() when I follow the notebook "Conversation Agent (for Chat Models)":

```
> Entering new AgentExecutor chain...
Traceback (most recent call last):
  File "/Users/wzy/opt/anaconda3/envs/py3.8/lib/python3.8/site-packages/langchain/agents/conversational_chat/base.py", line 106, in _extract_tool_and_input
    response = self.output_parser.parse(llm_output)
  File "/Users/wzy/opt/anaconda3/envs/py3.8/lib/python3.8/site-packages/langchain/agents/conversational_chat/base.py", line 51, in parse
    response = json.loads(cleaned_output)
  File "/Users/wzy/opt/anaconda3/envs/py3.8/lib/python3.8/json/__init__.py", line 357, in loads
    return _default_decoder.decode(s)
  File "/Users/wzy/opt/anaconda3/envs/py3.8/lib/python3.8/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/Users/wzy/opt/anaconda3/envs/py3.8/lib/python3.8/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "demo0_0_4.py", line 119, in <module>
    sys.stdout.write(agent_executor(query)['output'])
  File "/Users/wzy/opt/anaconda3/envs/py3.8/lib/python3.8/site-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/Users/wzy/opt/anaconda3/envs/py3.8/lib/python3.8/site-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "/Users/wzy/opt/anaconda3/envs/py3.8/lib/python3.8/site-packages/langchain/agents/agent.py", line 637, in _call
    next_step_output = self._take_next_step(
  File "/Users/wzy/opt/anaconda3/envs/py3.8/lib/python3.8/site-packages/langchain/agents/agent.py", line 553, in _take_next_step
    output = self.agent.plan(intermediate_steps, **inputs)
  File "/Users/wzy/opt/anaconda3/envs/py3.8/lib/python3.8/site-packages/langchain/agents/agent.py", line 286, in plan
    action = self._get_next_action(full_inputs)
  File "/Users/wzy/opt/anaconda3/envs/py3.8/lib/python3.8/site-packages/langchain/agents/agent.py", line 248, in _get_next_action
    parsed_output = self._extract_tool_and_input(full_output)
  File "/Users/wzy/opt/anaconda3/envs/py3.8/lib/python3.8/site-packages/langchain/agents/conversational_chat/base.py", line 109, in _extract_tool_and_input
    raise ValueError(f"Could not parse LLM output: {llm_output}")
ValueError: Could not parse LLM output: , wo xiang zhao yi ge hao de zhongwen yuyan xuexiao
```
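The immediate cause is visible in the chained traceback: the parser calls `json.loads` directly on the model's reply, so any reply that is not a bare JSON object (plain prose, or JSON wrapped in a markdown fence) raises `JSONDecodeError`, which is re-raised as the `ValueError` above. As a workaround, a lenient pre-processing step can rescue many such replies. This is only a sketch, not the library's own code; the fallback that treats unparseable text as a final answer is my assumption (the real parser raises instead):

```python
import json
import re


def lenient_parse(llm_output: str) -> dict:
    """Best-effort extraction of the agent's JSON action from a raw LLM reply.

    1. Strip a surrounding markdown code fence (```json ... ```) if present.
    2. Try json.loads on the cleaned text.
    3. On failure, fall back to treating the whole reply as a final answer
       (an assumption -- the stock parser raises ValueError here instead).
    """
    cleaned = llm_output.strip()
    # Remove a wrapping markdown fence, with or without a "json" language tag.
    fence = re.match(r"^```(?:json)?\s*(.*?)\s*```$", cleaned, re.DOTALL)
    if fence:
        cleaned = fence.group(1)
    try:
        return json.loads(cleaned)
    except json.JSONDecodeError:
        return {"action": "Final Answer", "action_input": cleaned}
```

With this, a fenced reply parses to its action dict, and a free-text reply (like the pinyin sentence in the traceback, "I want to find a good Chinese language school") degrades to a final answer instead of crashing the chain.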

Wzy232303 avatar Apr 06 '23 11:04 Wzy232303

Same problem here. The documentation doesn't say anything about the format of the JSON expected when using CHAT_CONVERSATIONAL_REACT_DESCRIPTION, or at least it isn't obvious where to find it.

Any clues?
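For what it's worth, the prompt behind CHAT_CONVERSATIONAL_REACT_DESCRIPTION instructs the model to reply with a single JSON blob carrying an `action` key (a tool name, or the literal string `"Final Answer"`) and an `action_input` key. A minimal illustration of the two shapes; the key names match the agent's format instructions as I understand them, so treat the details as an assumption rather than a spec:

```python
import json

# Tool invocation: "action" names the tool, "action_input" is its argument.
tool_call = json.dumps(
    {"action": "Search", "action_input": "Chinese language schools"}
)

# Final reply to the user: the literal action "Final Answer".
final = json.dumps(
    {"action": "Final Answer", "action_input": "Here are some schools..."}
)

# The output parser expects to json.loads exactly one such object per turn;
# anything else (prose, extra text around the JSON) triggers the ValueError.
parsed = json.loads(final)
print(parsed["action"])  # Final Answer
```

When the model ignores these instructions and answers in prose, you get exactly the `Could not parse LLM output` error above.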

josemlopez avatar Apr 27 '23 13:04 josemlopez

Hi, @Wzy232303! I'm Dosu, and I'm here to help the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

From what I understand, you encountered a ValueError while trying to parse the LLM output in the 'chat-conversational-react-description' notebook. It seems that the error message indicates that the LLM output cannot be parsed. Another user, @josemlopez, has also encountered the same problem and is seeking clarification on the expected format of the JSON when using CHAT_CONVERSATIONAL_REACT_DESCRIPTION.

Before we proceed, we would like to confirm if this issue is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on this issue. Otherwise, feel free to close the issue yourself, or the issue will be automatically closed in 7 days.

Thank you for your understanding and cooperation. We look forward to hearing from you soon.

dosubot[bot] avatar Sep 19 '23 16:09 dosubot[bot]