
Broken intermediate output / parsing is grossly unreliable

Open Kav-K opened this issue 2 years ago • 9 comments

Traceback (most recent call last):
  File "/home/gptbot/cogs/search_service_cog.py", line 322, in on_message
    response, stdout_output = await capture_stdout(
  File "/home/gptbot/cogs/search_service_cog.py", line 79, in capture_stdout
    result = await func(*args, **kwargs)
  File "/usr/lib/python3.9/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/base.py", line 213, in run
    return self(args[0])[self.output_keys[0]]
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "/usr/local/lib/python3.9/dist-packages/langchain/agents/agent.py", line 807, in _call
    output = self.agent.return_stopped_response(
  File "/usr/local/lib/python3.9/dist-packages/langchain/agents/agent.py", line 515, in return_stopped_response
    full_output = self.llm_chain.predict(**full_inputs)
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/llm.py", line 151, in predict
    return self(kwargs)[self.output_key]
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/llm.py", line 57, in _call
    return self.apply([inputs])[0]
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/llm.py", line 118, in apply
    response = self.generate(input_list)
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/llm.py", line 61, in generate
    prompts, stop = self.prep_prompts(input_list)
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/llm.py", line 79, in prep_prompts
    prompt = self.prompt.format_prompt(**selected_inputs)
  File "/usr/local/lib/python3.9/dist-packages/langchain/prompts/chat.py", line 127, in format_prompt
    messages = self.format_messages(**kwargs)
  File "/usr/local/lib/python3.9/dist-packages/langchain/prompts/chat.py", line 186, in format_messages
    message = message_template.format_messages(**rel_params)
  File "/usr/local/lib/python3.9/dist-packages/langchain/prompts/chat.py", line 43, in format_messages
    raise ValueError(
ValueError: variable agent_scratchpad should be a list of base messages, got {
    "action": "Search-Tool",
    "action_input": "Who is Harald Baldr?"
}

Most of the time the agent can't parse its own tool usage.

Kav-K avatar Apr 24 '23 16:04 Kav-K

I got the same error. Not sure whether it's a newly introduced bug.

jpzhangvincent avatar Apr 25 '23 02:04 jpzhangvincent

I think it was this PR that caused the issue, namely the introduction of message lists. When I roll back to before the PR was merged, the error is gone.

https://github.com/hwchase17/langchain/pull/2310

pressoholics avatar Apr 25 '23 16:04 pressoholics

@hwchase17 any insights on how to fix the bug to make it compatible with the current version?

jpzhangvincent avatar Apr 25 '23 17:04 jpzhangvincent

Facing this exact same issue using the chat-conversational-react-description agent.

sahil-lalani avatar May 10 '23 23:05 sahil-lalani

Having this same issue. This needs to be prioritized; conversational agents are absolutely broken for any sort of robust use case.

Kav-K avatar May 11 '23 01:05 Kav-K

I have the same error in the ConversationalChatAgent with most of my inputs.

axenov avatar May 12 '23 11:05 axenov

I have a pull request out for this.

The issue is that the agent doesn't always respond with proper Markdown, which makes the JSON invalid, OR it doesn't include the JSON at the beginning of the Markdown.

PR #4539

You can hot-patch this by adding the new output parser and then passing it in as an argument. You'll need to do something like this to include the hot patch (FYI, I didn't test the code below, so YMMV):

from langchain.agents import AgentExecutor
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI

# CharacterChatAgent and new_output_parser come from the PRs referenced above;
# tools and memory are assumed to be defined elsewhere.
prompt = CharacterChatAgent.create_prompt(
    tools,
    output_parser=new_output_parser,
    input_variables=["human_input", "chat_history", "agent_scratchpad"],
)

llm_chain = LLMChain(llm=ChatOpenAI(temperature=0.7), prompt=prompt)
agent = CharacterChatAgent(
    llm_chain=llm_chain, tools=tools, verbose=True, output_parser=new_output_parser
)
agent_chain = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    memory=memory,
)
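
For illustration only, a forgiving parser along these lines might look roughly like the sketch below. The class name and fallback behaviour are illustrative, not the code from PR #4539; it simply pulls the JSON blob out of the reply whether or not the Markdown fences are present:

import json
import re

from langchain.agents.conversational_chat.output_parser import ConvoOutputParser
from langchain.schema import AgentAction, AgentFinish


class ForgivingConvoOutputParser(ConvoOutputParser):
    """Tolerates missing or malformed Markdown fences around the action JSON."""

    def parse(self, text: str):
        # Grab the first {...} block, fenced or not.
        match = re.search(r"\{.*\}", text, re.DOTALL)
        if match:
            try:
                response = json.loads(match.group())
                action = response["action"]
                action_input = response["action_input"]
                if action == "Final Answer":
                    return AgentFinish({"output": action_input}, text)
                return AgentAction(action, action_input, text)
            except (json.JSONDecodeError, KeyError):
                pass
        # Fall back to treating the raw text as the final answer.
        return AgentFinish({"output": text.strip()}, text)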

treppers avatar May 12 '23 15:05 treppers

Just re-read the original issue and recalled that when fixing this, I may have done more than just update the output_parser.

Refer to my other PR #4411 for a new chat agent. It adds additional functionality of a character's voice, but I also fixed many of these issues by modifying the prompt.

treppers avatar May 12 '23 16:05 treppers

I'm facing exactly the same issue. If max_iterations is greater than 1, the agent raises a ValueError:

raise ValueError(
ValueError: variable agent_scratchpad should be a list of base messages, got {
    "action": "Conversation Knowledgebase",
    "action_input": "Can you please be more specific about what you need help with in the game?"
}

KeshavSingh29 avatar May 16 '23 09:05 KeshavSingh29

@treppers

I have a pull request out for this.

The issue is that the agent doesn't always respond with proper Markdown, which makes the JSON invalid, OR it doesn't include the JSON at the beginning of the Markdown.

PR #4539

You can hot-patch this by adding the new output parser and then passing it in as an argument. You'll need to do something like this to include the hot patch (FYI, I didn't test the code below, so YMMV):

I tried your patch with the new ConvoOutputParser class; however, I'm still getting the same error:

  File "/home/gene/endpoints/venv/lib/python3.10/site-packages/langchain/prompts/chat.py", line 43, in format_messages
    raise ValueError(
ValueError: variable agent_scratchpad should be a list of base messages, got {
    "action": "Price Lookup",
    "action_input": "Marble Madness"
}

Interestingly, on the latest langchain, this error only occurs when hitting the early stop (i.e. max_iterations=1) with early_stopping_method="generate". When it hits the stop:

Thought:

I now need to return a final answer based on the previous steps:

However, using the default early_stopping_method="force" causes no error and just returns the default string informing that the agent hit the max iterations.
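
For anyone who just needs the workaround in the meantime, that means leaving early_stopping_method at its default "force" (or setting it explicitly). A minimal sketch, assuming tools and llm are already defined:

from langchain.agents import AgentType, initialize_agent

# "force" makes the executor return a canned "stopped due to max iterations"
# string instead of asking the LLM to generate a final answer, which is the
# path that raises the agent_scratchpad ValueError.
agent_chain = initialize_agent(
    tools,
    llm,
    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    max_iterations=1,
    early_stopping_method="force",
    verbose=True,
)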

gururise avatar Jun 09 '23 15:06 gururise

This issue still exists today, early stopping + generate seems to be completely broken

bgreal5 avatar Aug 29 '23 16:08 bgreal5

still exists

Kav-K avatar Aug 30 '23 03:08 Kav-K

I can confirm the issue still exists in version 0.0.302

MichalQU avatar Sep 27 '23 11:09 MichalQU

I think this has been fixed now.

Kav-K avatar Nov 07 '23 21:11 Kav-K

I don't think it has been fixed. Tried out the latest version, 0.1.0. Still getting the same error when using early_stopping_method="generate" and max_iterations. Can anybody help with the patch?

armmarov avatar Jan 08 '24 23:01 armmarov

I am using 0.1.14 and still facing this issue. agent_scratchpad is being constructed as a string, but a list of messages is expected.

a-fatah avatar Apr 04 '24 03:04 a-fatah

yeah this is still super disgusting and absolutely broken

Kav-K avatar Apr 04 '24 03:04 Kav-K

Having the same issue. @Kav-K have you closed the issue?

iharsuvorau avatar Apr 14 '24 14:04 iharsuvorau

Also still seeing the issue with v0.1.19 when following the agents quickstart tutorial:

ValueError: variable agent_scratchpad should be a list of base messages, got

kirill-vas avatar May 10 '24 20:05 kirill-vas

Also still seeing the issue with v0.1.19 when following the agents quickstart tutorial:

ValueError: variable agent_scratchpad should be a list of base messages, got

I initialized my agent as follows, which seems to have fixed it for me:

from langchain.agents.format_scratchpad import format_log_to_str

# Format intermediate_steps into a plain string, so the prompt's
# {agent_scratchpad} variable receives text rather than a list of messages.
agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
    }
    | prompt
    | chat_model_with_stop
    | output_parser
)

jsemrau avatar May 11 '24 08:05 jsemrau

Also, what I noticed is that the error occurs when I pull the prompt from the hub:

prompt = hub.pull("hwchase17/react-chat-json")

and not when I copy/paste the prompt from the site manually and do this:

from langchain.prompts import PromptTemplate
from langchain.tools.render import render_text_description

# definition_prompts holds the prompt text copied by hand from the hub page.
prompt = PromptTemplate.from_template(definition_prompts)
prompt = prompt.partial(
    tools=render_text_description(tools),
    tool_names=", ".join([t.name for t in tools]),
)

jsemrau avatar May 12 '24 14:05 jsemrau

If you face this error when creating your agent with create_react_agent, note that the chain in this function formats agent_scratchpad as a string. If you construct your prompt manually, make sure to define this variable as a plain prompt template variable and not as a message placeholder, i.e. use something like ("human", "{agent_scratchpad}") instead of MessagesPlaceholder(variable_name="agent_scratchpad").
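
A minimal sketch of such a prompt (the system text here is made up; only the variable names input, tools, tool_names, and agent_scratchpad are the ones create_react_agent expects):

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "Answer the questions using these tools:\n{tools}\nTool names: {tool_names}"),
        # Plain string slot: create_react_agent fills this with formatted text.
        ("human", "{input}\n\nThought:{agent_scratchpad}"),
        # Not: MessagesPlaceholder(variable_name="agent_scratchpad")
    ]
)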

mrtj avatar May 24 '24 15:05 mrtj