
Creating conversational bots with memory, agents, and tools

Open Travis-Barton opened this issue 1 year ago • 7 comments

Hey all,

I'm trying to make a bot that can use the math and search tools while still carrying on a conversation with memory. What I have so far is this:

from langchain import OpenAI, LLMMathChain, SerpAPIWrapper
from langchain.agents import initialize_agent, Tool
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain.prompts import (
    ChatPromptTemplate,
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate
)

import os
os.environ["OPENAI_API_KEY"] = "..."
os.environ["SERPAPI_API_KEY"] = "..."

llm = ChatOpenAI(temperature=0)
llm1 = OpenAI(temperature=0)
search = SerpAPIWrapper()
llm_math_chain = LLMMathChain(llm=llm1, verbose=True)

tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events. "
                    "You should ask targeted questions"
    ),
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for when you need to answer questions about math"
    )
]
prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template("The following is a friendly conversation between a human and an AI. "
                                              "The AI is talkative and provides lots of specific details from "
                                              "its context. If the AI does not know the answer to a question, "
                                              "it truthfully says it does not know."),
    MessagesPlaceholder(variable_name="history"),
    HumanMessagePromptTemplate.from_template("{input}")
])

mrkl = initialize_agent(tools, llm, agent="chat-zero-shot-react-description", verbose=True)
memory = ConversationBufferMemory(return_messages=True)
memory.human_prefix = 'user'
memory.ai_prefix = 'assistant'
conversation = ConversationChain(memory=memory, prompt=prompt, llm=mrkl)

la = conversation.predict(input="Hi there! 123 raised to .23 power")

Unfortunately the last line gives this error:

Traceback (most recent call last):
  File "/Users/travisbarton/opt/anaconda3/envs/langchain_testing/lib/python3.10/code.py", line 90, in runcode
    exec(code, self.locals)
  File "<input>", line 1, in <module>
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for ConversationChain
llm
  Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, generate_prompt (type=type_error)
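The validation error happens because ConversationChain's llm field must be a concrete BaseLanguageModel, but mrkl is an AgentExecutor (a chain, not a model), so pydantic falls back to validating the field against the abstract base class and fails. A stripped-down sketch of the same failure mode — the classes and the check below are illustrative stand-ins, not the real LangChain implementation:

```python
from abc import ABC, abstractmethod


class BaseLanguageModel(ABC):
    # Stand-ins for the abstract methods named in the traceback.
    @abstractmethod
    def generate_prompt(self, prompts): ...

    @abstractmethod
    async def agenerate_prompt(self, prompts): ...


class AgentExecutor:
    """A chain-like object; deliberately NOT a BaseLanguageModel."""

    def run(self, text):
        return f"ran: {text}"


def make_conversation_chain(llm):
    # Loosely mirrors the pydantic field check behind the error above.
    if not isinstance(llm, BaseLanguageModel):
        raise TypeError(
            f"llm must be a BaseLanguageModel, got {type(llm).__name__}"
        )
    return llm


try:
    make_conversation_chain(AgentExecutor())
except TypeError as exc:
    print(exc)  # llm must be a BaseLanguageModel, got AgentExecutor
```

In LangChain itself the usual fix is the opposite wiring: give the memory to the agent (e.g. via `initialize_agent(..., memory=...)` or `AgentExecutor.from_agent_and_tools(..., memory=...)`, depending on your version) rather than giving the agent to ConversationChain, as the later comments in this thread demonstrate.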

How can I make a conversational bot that also has access to tools/agents and has memory?

(preferably with load_tools)

Travis-Barton avatar Mar 29 '23 04:03 Travis-Barton

You could check out https://python.langchain.com/en/latest/modules/agents/toolkits.html# for inspiration.

Related blog: https://blog.langchain.dev/agent-toolkits/

ajndkr avatar Mar 29 '23 15:03 ajndkr

@Travis-Barton - Were you able to make this work? I'm stuck here as well.

matthiasthomas avatar Apr 07 '23 14:04 matthiasthomas

@matthiasthomas yeah! I did this:

from langchain import OpenAI, LLMChain, SerpAPIWrapper
from langchain.agents import ZeroShotAgent, Tool, AgentExecutor
from langchain.memory import ConversationBufferMemory

import os
os.environ["OPENAI_API_KEY"] = "..."
os.environ["SERPAPI_API_KEY"] = "..."

search = SerpAPIWrapper()

tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events. "
                    "You should ask targeted questions"
    )
]

prefix = """Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:"""
suffix = """Begin!"

{chat_history}
Question: {input}
{agent_scratchpad}"""

prompt = ZeroShotAgent.create_prompt(
    tools,
    prefix=prefix,
    suffix=suffix,
    input_variables=["input", "chat_history", "agent_scratchpad"]
)
memory = ConversationBufferMemory(memory_key="chat_history")

llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)
agent_chain = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True, memory=memory)

agent_chain.run(input="How many people live in canada?")

agent_chain.run(input="What's the national anthem of that nation?")
agent_chain.run(input="What is the capital of that nation?")
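One detail worth noting in the snippet above: the memory's memory_key ("chat_history") must match the {chat_history} placeholder in the prompt suffix, or the conversation history never reaches the model. A toy, LangChain-free sketch of what ConversationBufferMemory does with that key (class and method names here are illustrative, not LangChain's API):

```python
class BufferMemory:
    """Toy stand-in for ConversationBufferMemory: accumulates turns
    and exposes them under a named prompt variable (memory_key)."""

    def __init__(self, memory_key="chat_history",
                 human_prefix="Human", ai_prefix="AI"):
        self.memory_key = memory_key
        self.human_prefix = human_prefix
        self.ai_prefix = ai_prefix
        self.turns = []

    def load_variables(self):
        # The prompt template looks up exactly this key, which is why
        # memory_key must match the {chat_history} placeholder.
        return {self.memory_key: "\n".join(self.turns)}

    def save_turn(self, user_input, ai_output):
        self.turns.append(f"{self.human_prefix}: {user_input}")
        self.turns.append(f"{self.ai_prefix}: {ai_output}")


memory = BufferMemory()
memory.save_turn("How many people live in canada?", "About 38 million.")
print(memory.load_variables()["chat_history"])
```

Because each run both reads and appends to this buffer, the follow-up questions ("that nation") can be resolved against the earlier turns.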

Travis-Barton avatar Apr 07 '23 17:04 Travis-Barton

Were you able to fix this?

/usr/local/lib/python3.9/dist-packages/langchain/agents/agent.py", line 792, in _call
    next_step_output = self._take_next_step(
  File "/usr/local/lib/python3.9/dist-packages/langchain/agents/agent.py", line 672, in _take_next_step
    output = self.agent.plan(intermediate_steps, **inputs)
  File "/usr/local/lib/python3.9/dist-packages/langchain/agents/agent.py", line 385, in plan
    return self.output_parser.parse(full_output)
  File "/usr/local/lib/python3.9/dist-packages/langchain/agents/mrkl/output_parser.py", line 20, in parse
    raise ValueError(f"Could not parse LLM output: `{text}`")
ValueError: Could not parse LLM output: `Thought: This is just a greeting, no specific action needed.`
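For context: the MRKL output parser only accepts replies containing either an "Action:"/"Action Input:" pair or a "Final Answer:" marker; a reply that is just a Thought has neither, hence the ValueError. A simplified sketch of that parsing rule (not the exact LangChain source):

```python
import re

FINAL_ANSWER = "Final Answer:"


def parse_mrkl_output(text):
    """Simplified sketch of the MRKL parser's rule: the reply must
    contain either an Action/Action Input pair or a Final Answer
    marker; anything else is unparseable."""
    if FINAL_ANSWER in text:
        return ("finish", text.split(FINAL_ANSWER)[-1].strip())
    match = re.search(
        r"Action\s*:\s*(.*?)\nAction\s*Input\s*:\s*(.*)", text, re.DOTALL
    )
    if not match:
        raise ValueError(f"Could not parse LLM output: `{text}`")
    return ("action", match.group(1).strip(), match.group(2).strip())


# A bare greeting thought has neither marker, reproducing the error:
try:
    parse_mrkl_output("Thought: This is just a greeting, no action needed.")
except ValueError as exc:
    print(exc)
```

Depending on your LangChain version, AgentExecutor also accepts handle_parsing_errors=True, which feeds the unparseable text back to the model instead of raising; switching to a conversational agent type, as in the next comment, is another common workaround.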

Kav-K avatar Apr 24 '23 16:04 Kav-K

@Kav-K I was not, I just changed my method

Travis-Barton avatar Apr 24 '23 16:04 Travis-Barton

@Travis-Barton Can you please share the code that worked? I have the same issue. Thanks

haqqibrahim avatar Apr 30 '23 08:04 haqqibrahim

I was having the same issue; I used this approach and it worked.

from langchain.agents import AgentExecutor, ConversationalChatAgent, Tool
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationSummaryBufferMemory
from langchain.utilities import SerpAPIWrapper

search = SerpAPIWrapper()

tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events. "
                    "You should ask targeted questions"
    )
]

llm = ChatOpenAI(temperature=1, client=None)
memory = ConversationSummaryBufferMemory(llm=llm, memory_key="chat_history", return_messages=True, human_prefix="user", ai_prefix="assistant")  
system_prompt_template = " An AI Assistant .... "

custom_agent = ConversationalChatAgent.from_llm_and_tools(llm=llm, tools=tools, system_message=system_prompt_template)
agent_executor = AgentExecutor.from_agent_and_tools(agent=custom_agent, tools=tools, memory=memory)
agent_executor.verbose = True

print(agent_executor.run("How many people live in canada?"))

Basically this uses LLMChain under the hood, and the key is the ConversationalChatAgent class.

jasielmacedo avatar May 03 '23 22:05 jasielmacedo

Hi, @Travis-Barton! I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

Based on the comments, it seems like you encountered an error when trying to instantiate the ConversationChain class to create a conversational bot with memory, agents, and tools. However, you were able to resolve the issue by checking out the documentation and related blog for inspiration. Great job on finding a solution!

On the other hand, Kav-K encountered a different error and asked for help, but it remains unresolved at the moment. If you are still experiencing the same issue or have any updates, please let the LangChain team know by commenting on this issue.

If the issue is no longer relevant or you have resolved it yourself, feel free to close the issue. Otherwise, if there is no further activity, the issue will be automatically closed in 7 days.

Thank you for your contribution to the LangChain repository!

dosubot[bot] avatar Sep 22 '23 16:09 dosubot[bot]