How do I coax the "conversational-react-description" agent to use Wolfram Alpha
Hi, in this Wolfram Alpha demo Colab notebook, if you enter
How many ping pong balls fit into a jumbo jet?
Wolfram Alpha returns "31 million", but the conversational agent instead responds with "that's a lot of ping pong balls".
I'm curious how the agent decides which tool to use, and how to improve this so that Wolfram Alpha is selected in this case.
I've read enough to answer this question. The agent injects instructions and tool descriptions into the prompt template: the template contains a description of the Wolfram Alpha tool, followed by instructions on how tools are to be used. The rest is up to the LLM. When you ask the question, it is sent to the LLM along with the prompt (which contains those instructions as well as the description of the WA tool), and the language model decides whether WA should be used based on the context of the question plus the description of the tool.
So the short answer is: you can modify the template (for example the tool description, or simply say "use WA for xxx types of questions") to guide the LLM to pick WA for those kinds of questions.
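For example, here's a minimal sketch of sharpening the tool description when building the agent. This assumes the 2023-era LangChain API and a WOLFRAM_ALPHA_APPID in the environment; the description text itself is purely illustrative.

from langchain.agents import initialize_agent, Tool
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper

llm = ChatOpenAI(temperature=0)
wolfram = WolframAlphaAPIWrapper()  # needs WOLFRAM_ALPHA_APPID set in the environment

tools = [
    Tool(
        name="Wolfram Alpha",
        func=wolfram.run,
        # Broaden the description so estimation questions also point at this tool.
        description=(
            "A wrapper around Wolfram Alpha. Use this for any question involving "
            "math, physical quantities, unit conversions, or numerical estimation "
            "(e.g. 'how many X fit in Y'). Input should be a search query."
        ),
    )
]

memory = ConversationBufferMemory(memory_key="chat_history")
agent = initialize_agent(
    tools, llm, "conversational-react-description", verbose=True, memory=memory
)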
I've been curious about this stuff. So if the OpenAI chat model is being used, does chat-conversational-react-description ask GPT something like:
here's what the user asked: "users question here"
The Wolfram tool does this: ...some description of the tool...
Do you have an answer? Or should I use the Wolfram tool?
Would be interested to see the underlying q's that are being asked to figure out when to use a tool and how to bias langchain to use a tool.
After you create an agent instance, you can find the template within its agent attribute; the template shows the tool descriptions and how the tools are to be used (in a step-by-step manner).
When constructing the template, agent, etc., there's an option to return intermediate steps, which shows you which tool is being used, what inputs are sent to it, and what results come back. The rest is up to the LLM to come up with the next step, i.e. call another tool, give an answer, or something else. This continues until the LLM decides it has arrived at an answer.
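As a concrete sketch (reusing the objects from the snippet above; the attribute paths match the 2023-era LangChain layout and may differ across versions or between the string and chat agents), you can print the prompt and capture the intermediate steps like this:

# Print the template that carries the tool descriptions and instructions.
# (For chat-conversational-react-description the system message lives at
#  agent.agent.llm_chain.prompt.messages[0].prompt.template instead.)
print(agent.agent.llm_chain.prompt.template)

# Ask for the intermediate steps back so you can see each tool call.
agent_with_steps = initialize_agent(
    tools,
    llm,
    "conversational-react-description",
    verbose=True,
    memory=ConversationBufferMemory(memory_key="chat_history", output_key="output"),
    return_intermediate_steps=True,
)
result = agent_with_steps({"input": "How many ping pong balls fit into a jumbo jet?"})
for action, observation in result["intermediate_steps"]:
    print(action.tool, "| input:", action.tool_input, "| result:", observation)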
here's an example template:
Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Wolfram Alpha: A wrapper around Wolfram Alpha. Useful for when you need to answer questions about Math, Science, Technology, Culture, Society and Everyday Life. Input should be a search query.
To use a tool, please use the following format:
Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of [Wolfram Alpha]
Action Input: the input to the action
Observation: the result of the action
When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:
Thought: Do I need to use a tool? No
AI: [your response here]
Previous conversation history: {history}
New input: {input} {agent_scratchpad}
here's an example of the intermediate steps when using Wolfram Alpha:
Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Wolfram Alpha
Action Input: Solve x+y=10 and x-y=4
Observation: Assumption: solve x + y = 10, x - y = 4. Answer: x = 7 and y = 3
Thought: Do I need to use a tool? No
AI: The answer is x = 7 and y = 3.
Finished chain.
Does it resolve to Wolfram Alpha for my example?
“How many ping pong balls fit in a jumbo jet?”
Again, there are multiple factors at play here. For example, if you change the temperature of the GPT model, you might get different decisions from it. If you change the template passed to the GPT model, then GPT might or might not decide to use Wolfram Alpha.
A couple of things you can do: change the tool description to say you want Wolfram Alpha used more often in this or that case; rephrase the question in a more mathematical way so GPT is more likely to pick WA for these types of questions; or play with the template to change the instructions to favour these types of questions. The rest is up to GPT, and how it interprets the prompt and makes decisions comes from the black-box nature of the model. Even if you set everything up right and send the same question again, there's no guarantee GPT will pick WA just because it did the first time: when the temperature is non-zero, its outputs are sampled from a probability distribution, and a different surrounding context also influences the output.
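A small sketch of those knobs, assuming ChatOpenAI as the LLM (the model name and question wording are just illustrative):

from langchain.chat_models import ChatOpenAI

# 1. Temperature 0 removes most of the sampling variance in the
#    tool-selection step (pass this llm into initialize_agent as above).
llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo")

# 2. A more explicitly quantitative phrasing tends to bias the model
#    toward the Wolfram Alpha tool.
question = ("Numerically estimate how many standard 40 mm ping pong balls "
            "fit inside the interior volume of a Boeing 747.")
agent.run(question)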
Thanks so much! This is interesting.
How can I modify the prompt of an agent with chat-conversational-react-description? For context, here is my code:
agent = initialize_agent(tools, chatgpt, "chat-conversational-react-description", verbose=True, max_iterations = 2, early_stopping_method = 'generate', memory = memory)
langchain is just a piece of shit with shit support. A great product doesn't mean pushing out new features while there is so much mess in the existing ones. In all the excitement to ship new features, they forget the actual purpose.
@sahil-lalani Had the same question and didn't want to create a whole custom agent just to add the date etc. The way I've done it for now is to first initialize the agent (I called mine agent_chain) and then overwrite the prompt template:
agent_chain.agent.llm_chain.prompt.messages[0].prompt.template = "Whatever you want here"
This replaces the existing initial part of the prompt (the whole "Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing..." section).
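Putting that together, here's a sketch that reuses the tools and chatgpt objects from the question above. The attribute path matches chat-conversational-react-description in 2023-era LangChain but may differ across versions, and the replacement text is just an example.

from langchain.agents import initialize_agent
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
agent_chain = initialize_agent(
    tools,
    chatgpt,
    "chat-conversational-react-description",
    verbose=True,
    memory=memory,
)

# messages[0] is the system message that opens with "Assistant is a large
# language model trained by OpenAI...". Overwrite it in place. Keep the new
# text free of {placeholders}, otherwise the prompt's input_variables would
# also need updating.
agent_chain.agent.llm_chain.prompt.messages[0].prompt.template = (
    "Assistant is a concise AI. Today's date is 13 April 2023. Prefer the "
    "Wolfram Alpha tool for any quantitative or estimation question."
)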
Hi, @boxabirds! I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.
From what I understand, the issue is about making the "conversational-react-description" agent use Wolfram Alpha instead of choosing a generic response when asked a specific question. Users binbinxue and umaar provided explanations on how the agent selects the tool and suggested modifying the template to guide the agent to use Wolfram Alpha. User sahil-lalani asked how to modify the prompt of an agent, and user ColinTitahi provided a solution by overwriting the prompt template.
Before we close this issue, we wanted to check if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself or it will be automatically closed in 7 days.
Thank you for your understanding!