langchain
Issue: Agent Executor tends to answer the user's question directly and set that answer as the Action Input during the iterative thought process.
I am following the Custom Agent with Tool Retrieval example. Some of the time (not always), the Agent Executor returns the answer itself as the Action Input. The inconsistency makes things even worse.
For example: I have a Small Talk tool that is in charge of answering casual conversation from the user, and I have given my agent a profile (name: Sam). Here is one of the scenarios I got:
Question: Hello I am Bob, what is your name?
Thought: The user is initiating a small talk conversation
Action Input: Hi Bob, I am Sam, your personal assistant. How can I assist you today?
Observation: As an AI language model, I don't need any assistance, Sam. But thank you for asking! How about you? Is there anything I can help you with?
So the response the agent was expected to give became the Action Input, which in turn produced another response that makes no sense at all. The same thing happens with other tools, such as querying content from the vectorstore.
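To see why the tool ends up receiving the agent's own reply, here is a minimal, dependency-free sketch of a ReAct-style output parser, modeled loosely on how LangChain extracts tool calls from the LLM's text (the names `ACTION_RE` and `parse_agent_step` are illustrative, not the library's API). Whatever the model writes after "Action Input:" is passed to the tool verbatim, so an answer-shaped Action Input flows straight through:

```python
import re

# Hypothetical parser modeled on the ReAct output format; the Action Input
# is captured verbatim, with no check that it is still the user's question.
ACTION_RE = re.compile(
    r"Action:\s*(?P<tool>.*?)\nAction Input:\s*(?P<tool_input>.*)", re.DOTALL
)

def parse_agent_step(llm_output: str) -> tuple[str, str]:
    """Extract (tool, tool_input) from one agent step."""
    match = ACTION_RE.search(llm_output)
    if match is None:
        raise ValueError(f"Could not parse agent output: {llm_output!r}")
    return match.group("tool").strip(), match.group("tool_input").strip()

# The problematic step from the issue (an explicit Action line added for
# illustration): the model put its own reply into Action Input.
step = (
    "Thought: The user is initiating a small talk conversation\n"
    "Action: Small Talk\n"
    "Action Input: Hi Bob, I am Sam, your personal assistant."
)
tool, tool_input = parse_agent_step(step)
print(tool)        # Small Talk
print(tool_input)  # Hi Bob, I am Sam, your personal assistant.
```

The tool then receives an assistant-style greeting instead of the user's question, which is why the Observation reads like the model replying to itself.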
Suggestion:
Is there a way to restrict the Action Input to just the user's question, instead of allowing the agent to answer it and modify the initial context?
Whether through the prompt or the temperature, I'm open to any advice. Thanks!
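One possible workaround, sketched below under the assumption that the tools take a single string input: wrap each tool so that it ignores whatever the model generated as the Action Input and always receives the original user question. The names `make_question_locked_tool` and `small_talk` are hypothetical, not part of the LangChain API:

```python
# Hypothetical workaround: discard the model-generated Action Input and
# feed the tool the raw user question instead.
def make_question_locked_tool(tool_fn, original_question: str):
    def locked(_action_input: str) -> str:
        # Ignore _action_input entirely; use the unmodified question.
        return tool_fn(original_question)
    return locked

# A toy small-talk tool standing in for the real one.
def small_talk(text: str) -> str:
    return f"(small talk reply to: {text})"

question = "Hello I am Bob, what is your name?"
locked_small_talk = make_question_locked_tool(small_talk, question)

# Even if the agent hallucinates an answer as the Action Input,
# the tool still sees the user's question.
print(locked_small_talk("Hi Bob, I am Sam, your personal assistant."))
# (small talk reply to: Hello I am Bob, what is your name?)
```

In practice it may also help to set the temperature to 0 so the action formatting is more deterministic, though that alone does not guarantee the model won't put an answer in the Action Input.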
+1. I only want the final answer; I don't want to see the AI's thought process.
I am also facing the same issue. Any suggestions?
+1
Hi, @ximnet-cheeyee! I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.
From what I understand, the issue is about the Agent Executor in the Custom Agent with Tool Retrieval example sometimes returning the answer itself as the Action Input, leading to inconsistent and nonsensical responses. The suggested resolution is to restrict the action input to just the user question instead of allowing the agent to modify the initial context. Other users have expressed agreement with this suggestion.
Before we close this issue, we wanted to check with you if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself or it will be automatically closed in 7 days.
Thank you for your contribution to the LangChain repository!