
ValueError: Could not parse LLM output

ghost opened this issue 1 year ago · 0 comments

Hey!

Using the Cohere API, I'm getting a parsing error.

Here is the traceback:

(venv) user@user-server:~/photo-GPT-telegram/photo-GPT-telegram/photo_gpt$ python main.py 
2023-04-29 17:32:46,622 - apscheduler.scheduler - INFO - Scheduler started


> Entering new AgentExecutor chain...
2023-04-29 17:33:24,986 - telegram.ext.dispatcher - ERROR - No error handlers are registered, logging exception.
Traceback (most recent call last):
  File "/home/user/photo-GPT-telegram/photo-GPT-telegram/venv/lib/python3.10/site-packages/telegram/ext/dispatcher.py", line 432, in process_update
    handler.handle_update(update, self, check, context)
  File "/home/user/photo-GPT-telegram/photo-GPT-telegram/venv/lib/python3.10/site-packages/telegram/ext/handler.py", line 156, in handle_update
    return self.callback(update, context)
  File "/home/user/photo-GPT-telegram/photo-GPT-telegram/photo_gpt/main.py", line 46, in msg_handler
    msg = handle_msg(update)
  File "/home/user/photo-GPT-telegram/photo-GPT-telegram/photo_gpt/main.py", line 34, in handle_msg
    chain_response = conv_bot.get_agent().run(user_msg)
  File "/home/user/photo-GPT-telegram/photo-GPT-telegram/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 239, in run
    return self(args[0])[self.output_keys[0]]
  File "/home/user/photo-GPT-telegram/photo-GPT-telegram/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 142, in __call__
    raise e
  File "/home/user/photo-GPT-telegram/photo-GPT-telegram/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 139, in __call__
    outputs = self._call(inputs)
  File "/home/user/photo-GPT-telegram/photo-GPT-telegram/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 554, in _call
    next_step_output = self._take_next_step(
  File "/home/user/photo-GPT-telegram/photo-GPT-telegram/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 406, in _take_next_step
    output = self.agent.plan(intermediate_steps, **inputs)
  File "/home/user/photo-GPT-telegram/photo-GPT-telegram/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 102, in plan
    action = self._get_next_action(full_inputs)
  File "/home/user/photo-GPT-telegram/photo-GPT-telegram/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 64, in _get_next_action
    parsed_output = self._extract_tool_and_input(full_output)
  File "/home/user/photo-GPT-telegram/photo-GPT-telegram/venv/lib/python3.10/site-packages/langchain/agents/conversational/base.py", line 84, in _extract_tool_and_input
    raise ValueError(f"Could not parse LLM output: `{llm_output}`")
ValueError: Could not parse LLM output: `
New input: Can you draw a pencil?

New input: Can you draw a pencil?

New input: Can you draw a pencil?

New input`
cc^C2023-04-29 17:33:37,810 - telegram.ext.updater - INFO - Received signal 2 (SIGINT), stopping...
2023-04-29 17:33:37,810 - apscheduler.scheduler - INFO - Scheduler has been shut down

The weird thing here is that what is referred to as the "llm output" actually seems to be my input. I also sometimes ran into even weirder behavior: when I asked "can you draw something?", for instance, the looping string was "I want to draw a cat", so in that case it did seem to be the actual LLM output!
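For context, the `_extract_tool_and_input` step that raises here roughly checks the completion for either a final-answer prefix or an `Action:`/`Action Input:` pair, and fails on anything else. A simplified, self-contained sketch of that logic (names abbreviated; not the exact LangChain code) shows why a completion that just echoes the prompt blows up:

```python
import re

AI_PREFIX = "AI"

def extract_tool_and_input(llm_output: str):
    """Simplified sketch of the conversational agent's output parser:
    it expects either a final answer ("AI: ...") or an
    "Action: .../Action Input: ..." pair; anything else raises."""
    if f"{AI_PREFIX}:" in llm_output:
        return AI_PREFIX, llm_output.split(f"{AI_PREFIX}:")[-1].strip()
    match = re.search(r"Action: (.*?)[\n]*Action Input: (.*)", llm_output)
    if not match:
        raise ValueError(f"Could not parse LLM output: `{llm_output}`")
    return match.group(1).strip(), match.group(2).strip()

# A well-formatted completion parses fine...
print(extract_tool_and_input("Action: image_gen\nAction Input: a pencil"))
# → ('image_gen', 'a pencil')

# ...but a completion that just repeats the prompt does not.
try:
    extract_tool_and_input("New input: Can you draw a pencil?")
except ValueError as e:
    print("parse failed:", e)
```

So whether the echoed text is your input or the model's output, the symptom is the same: Cohere's completion doesn't contain the `Action:`/`AI:` markers the prompt asks for, and the parser gives up.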

Any idea about what could be wrong?

PS: that project is really cool! thanks for sharing it :)

ghost · Apr 29 '23 16:04