
Using create_openai_functions_agent in LangGraph: returned intermediate_steps cause the agent to answer incorrectly.

lironezra opened this issue 6 months ago • 4 comments

Checked other resources

  • [X] I added a very descriptive title to this issue.
  • [X] I searched the LangGraph/LangChain documentation with the integrated search.
  • [X] I used the GitHub search to find a similar question and didn't find it.
  • [ ] I am sure that this is a bug in LangGraph/LangChain rather than my code.
  • [X] I am sure this is better as an issue rather than a GitHub discussion, since this is a LangGraph bug and not a design question.

Example Code

class AgentState(TypedDict):
    # The input string
    input: str
    # The list of previous messages in the conversation
    chat_history: list[BaseMessage]
    # chat_history: Annotated[list[BaseMessage], operator.add]
    # The outcome of a given call to the agent
    # Needs `None` as a valid type, since this is what this will start as
    agent_outcome: Union[AgentAction, AgentFinish, None]
    # List of actions and corresponding observations
    # Here we annotate this with `operator.add` to indicate that operations to
    # this state should be ADDED to the existing values (not overwrite it)
    intermediate_steps: Annotated[list[tuple[AgentAction, str]], operator.add]
    # intermediate_steps: list[tuple[AgentAction, str]]
    # The result of the RAG - extra information for the model if there is an answer from the FAQS step to the user question.
    rag_result: str

    #* External parameters from the request
    execRAG: bool
    agentPrompt: str

# Define the agent Node
def run_agent(data: AgentState):
    # Guard against a missing or empty history (.get avoids a KeyError)
    if not data.get("chat_history"):
        data["chat_history"] = []

    conversation_history = data["chat_history"]
    agent_system_prompt = data["agentPrompt"]
    
    final_llm_node = agent_llm.with_config(tags=["final_node"])
    final_agent_prompt = build_agent_prompt("""{}""", agent_system_prompt, BRAND_NAME, BRAND_CODE)
    openai_functions_agent_prompt = ChatPromptTemplate.from_messages(
        [
            SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template=final_agent_prompt)),
            *([] if data["rag_result"] is None else [
                    SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template=automatic_faq_prompt.format(retrieval_answer=data["rag_result"], question=data["input"])))
            ]),
            MessagesPlaceholder("chat_history", optional=True),
            HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}')),
            MessagesPlaceholder("agent_scratchpad"),
        ]
    )

    openai_functions_agent = create_openai_functions_agent(final_llm_node, tools, openai_functions_agent_prompt)
    agent_outcome = openai_functions_agent.invoke({"input": data["input"], "intermediate_steps": data["intermediate_steps"], "chat_history": data["chat_history"]})
    

    if isinstance(agent_outcome, AgentFinish):
        conversation_history = conversation_history + [HumanMessage(content=data["input"])] + agent_outcome.messages

    return {"agent_outcome": agent_outcome, "chat_history": conversation_history}

# Define the function to execute tools
def execute_tools(data: AgentState):
    # ToolExecutor is a helper class for running tools:
    # it takes an agent action, calls the corresponding tool, and returns the result
    tool_executor = ToolExecutor(tools)

    # Get the most recent agent_outcome - this is the key added in the `agent` above
    agent_action = data["agent_outcome"]
    output = tool_executor.invoke(agent_action)

    return {
        "intermediate_steps": [(agent_action, str(output))],
    }
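For context on why intermediate_steps only ever grows: the `Annotated[..., operator.add]` annotation tells LangGraph to merge each node's return value into the state with that reducer instead of overwriting the key. A minimal sketch of that merge semantics (plain Python, not the library internals):

```python
import operator

# LangGraph merges a reducer-annotated key as:
#   new_value = reducer(existing_value, returned_value)
# With operator.add on a list, every (action, observation) tuple that
# execute_tools returns is APPENDED to the accumulated scratchpad.
existing_steps = [("lookup_order", "order #123 has shipped")]
node_update = [("check_faq", "no matching FAQ entry")]

merged = operator.add(existing_steps, node_update)  # same as list concatenation
print(len(merged))  # 2 entries: nothing was overwritten
```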

Error Message and Stack Trace (if applicable)

No response

Description

I am encountering an issue while building an agent using LangGraph with the create_openai_functions_agent function. The problem arises with intermediate_steps: after a tool invocation and the reinitialization of intermediate_steps, the accumulated steps seem to cause the agent to repeat its previous responses to new user queries.

Here is what my graph looks like: *(image attached)*
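In case the image does not render, the wiring is roughly the standard agent-executor loop: the agent node runs, a conditional edge checks for AgentFinish, and otherwise the tools node appends to intermediate_steps. A hand-rolled sketch of that loop (the stand-in classes, `run_graph`, and the toy nodes are my own illustrations, not the LangGraph API):

```python
# Minimal stand-ins for the langchain_core.agents types; only the
# attributes used for routing matter here (assumption for illustration).
class AgentAction:
    def __init__(self, tool, tool_input, log=""):
        self.tool, self.tool_input, self.log = tool, tool_input, log

class AgentFinish:
    def __init__(self, return_values, log=""):
        self.return_values, self.log = return_values, log

def run_graph(state, run_agent, execute_tools, max_steps=10):
    """Hand-rolled equivalent of the agent -> tools -> agent loop."""
    for _ in range(max_steps):
        state.update(run_agent(state))
        if isinstance(state["agent_outcome"], AgentFinish):
            return state  # conditional edge -> END
        # operator.add reducer: tool output is appended, never overwritten
        state["intermediate_steps"] += execute_tools(state)["intermediate_steps"]
    return state

# Toy nodes: the agent calls one tool, then finishes.
def toy_agent(state):
    if state["intermediate_steps"]:
        return {"agent_outcome": AgentFinish({"output": "done"})}
    return {"agent_outcome": AgentAction("lookup_order", "order #123")}

def toy_tools(state):
    return {"intermediate_steps": [(state["agent_outcome"], "shipped")]}

final = run_graph({"intermediate_steps": []}, toy_agent, toy_tools)
print(type(final["agent_outcome"]).__name__)  # AgentFinish
```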

I've attached an image that illustrates the scenario: *(image attached)*

As shown in the image, the user initially asks about their order and receives a relevant response. However, when the user subsequently asks an unrelated question, the agent erroneously repeats the response to the first question. It's important to note that the model possesses the correct information to respond to the second question but seems to revert to using intermediate_steps for its response.

To diagnose whether the issue was specifically related to intermediate_steps, I replicated the scenario in the Langsmith playground by disabling intermediate_steps. This change led to the agent providing the correct answer, indicating that the issue indeed pertains to how intermediate_steps are implemented or invoked.
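Given that reducer behavior, one workaround I have been considering is to start every user turn with a fresh, empty intermediate_steps, so tool observations from a previous question cannot leak into the next prompt. A sketch (`fresh_turn_state` is a hypothetical helper of mine; the field names follow the AgentState above):

```python
def fresh_turn_state(user_input, chat_history, exec_rag=False, agent_prompt=""):
    """Build the graph input for one user turn (hypothetical helper).

    intermediate_steps starts empty each turn so that observations from a
    previous question are not replayed into the next prompt.
    """
    return {
        "input": user_input,
        "chat_history": chat_history,
        "agent_outcome": None,
        "intermediate_steps": [],  # reset per turn
        "rag_result": None,
        "execRAG": exec_rag,
        "agentPrompt": agent_prompt,
    }

state = fresh_turn_state("Where is my order?", [])
print(state["intermediate_steps"])  # []
```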

Here are some screenshots from LangSmith that show the issue; when I remove the intermediate_steps, the agent answers correctly:

Without intermediate_steps - agent answers correctly *(image attached)*

With intermediate_steps - agent answers incorrectly! *(image attached)*

Questions:

  1. Is there a way to disable intermediate_steps in LangGraph when using create_openai_functions_agent?
  2. Could there be an error in how I have implemented intermediate_steps?
  3. Is there any other solution for this case? I have been stuck on this issue for several days and am unsure how to resolve it. Any guidance or suggestions would be greatly appreciated.

If someone wants to schedule a Zoom appointment with me for further assistance, I would be happy to do so :)

System Info

aiohttp==3.9.1 aiosignal==1.3.1
aiostream==0.5.2
aniso8601==9.0.1
anyio==4.3.0 argilla==0.0.1
asttokens==2.4.1
async-timeout==4.0.2
attrs==22.2.0 backoff==2.2.1
beautifulsoup4==4.12.3 bidict==0.23.1
boto3==1.34.117
botocore==1.34.117
build==1.2.1 CacheControl==0.14.0 cachetools==5.3.2 certifi==2022.12.7 cffi==1.16.0 charset-normalizer==3.1.0 cleo==2.1.0 click==8.1.3 cohere==5.5.4 colorama==0.4.6 comm==0.2.2 contourpy==1.2.1 crashtest==0.4.1 cycler==0.12.1 dataclasses-json==0.5.7 debugpy==1.8.2 decorator==5.1.1 Deprecated==1.2.14 distlib==0.3.8 distro==1.9.0 dnspython==2.3.0 docopt==0.6.2 dulwich==0.21.7 elastic-transport==8.13.1 elasticsearch==8.13.2 et-xmlfile==1.1.0 eventlet==0.33.3 executing==2.0.1 fastapi==0.109.1 fastavro==1.9.4 fastjsonschema==2.19.1 filelock==3.13.3 fonttools==4.53.1 frozenlist==1.3.3 fsspec==2024.3.1 gevent==22.10.2 gevent-websocket==0.10.1 greenlet==2.0.2 groq==0.9.0 h11==0.14.0 httpcore==1.0.5 httpx==0.27.0 httpx-sse==0.4.0 huggingface-hub==0.23.2 idna==3.4 importlib-metadata==6.11.0 installer==0.7.0 ipykernel==6.29.5 ipython==8.26.0 itsdangerous==2.1.2 jaraco.classes==3.4.0 jedi==0.19.1 Jinja2==3.1.2 jmespath==1.0.1 joblib==1.3.2 jsonpatch==1.33 jsonpointer==2.4 jupyter_client==8.6.2 jupyter_core==5.7.2 keyring==24.3.1 kiwisolver==1.4.5 langchain==0.2.1 langchain-cohere==0.1.5 langchain-community==0.2.1 langchain-core==0.2.18 langchain-groq==0.1.5 langchain-openai==0.1.8 langchain-pinecone==0.1.1 langchain-text-splitters==0.2.0 langchainhub==0.1.20 langgraph==0.1.8 langsmith==0.1.85 Levenshtein==0.25.1 llama-index==0.9.8.post1 loguru==0.7.0 lxml==5.2.0 Markdown==3.6 MarkupSafe==2.1.2 marshmallow==3.20.2 marshmallow-enum==1.5.1 matplotlib==3.9.1 matplotlib-inline==0.1.7 more-itertools==10.2.0 motor==3.5.1 msg-parser==1.2.0 msgpack==1.0.8 multidict==6.0.4 mypy-extensions==1.0.0 nest-asyncio==1.6.0 nltk==3.8.1 numpy==1.24.2 olefile==0.47 openai==1.30.5 openapi-schema-pydantic==1.2.4 openpyxl==3.1.2 opentelemetry-api==1.25.0 opentelemetry-sdk==1.25.0 opentelemetry-semantic-conventions==0.46b0 orjson==3.10.0 packaging==23.2 pandas==2.2.1 parso==0.8.4 pexpect==4.9.0 pillow==10.3.0 pinecone-client==3.2.2 pipreqs==0.4.12 pkginfo==1.10.0 platformdirs==4.2.0 poetry==1.8.2 
poetry-core==1.9.0 poetry-plugin-export==1.7.1 prompt_toolkit==3.0.47 psutil==6.0.0 ptyprocess==0.7.0 pure-eval==0.2.2 pycparser==2.22 pydantic==1.10.7 Pygments==2.18.0 pymongo==4.8.0 pyodbc==5.0.1 pypandoc==1.13 pyparsing==3.1.2 pypdf==3.8.1 pyproject_hooks==1.0.0 python-dateutil==2.8.2 python-docx==1.1.0 python-dotenv==1.0.1 python-engineio==4.9.0 python-Levenshtein==0.25.1 python-magic==0.4.27 python-pptx==0.6.23 python-socketio==5.11.2 pytz==2023.3 pywin32==306 pywin32-ctypes==0.2.2 PyYAML==6.0 pyzmq==26.0.3 rapidfuzz==3.8.1 redis==5.0.1 regex==2023.3.23 requests==2.31.0 requests-toolbelt==1.0.0 s3transfer==0.10.1 shellingham==1.5.4 simple-websocket==1.0.0 six==1.16.0 sniffio==1.3.1 soupsieve==2.5 SQLAlchemy==2.0.23 stack-data==0.6.3 starlette==0.35.1 tenacity==8.2.2 tiktoken==0.7.0 tokenizers==0.15.2 tomlkit==0.12.4 tornado==6.4.1 tqdm==4.65.0 traitlets==5.14.3 trove-classifiers==2024.3.25 types-requests==2.31.0.6 types-urllib3==1.26.25.14 typing-inspect==0.8.0 typing_extensions==4.9.0 tzdata==2024.1 Unidecode==1.3.8 unstructured==0.6.2 urllib3==1.26.15 uvicorn==0.27.0.post1 virtualenv==20.25.1 wcwidth==0.2.13 Werkzeug==2.2.3 wikipedia==1.4.0 win32-setctime==1.1.0 wrapt==1.16.0 wsproto==1.2.0 XlsxWriter==3.2.0 yarg==0.1.9 yarl==1.8.2 zipp==3.18.1 zope.event==5.0 zope.interface==6.2

Platform: Windows

Python version: Python 3.11.2

lironezra avatar Aug 11 '24 08:08 lironezra