langchain
Allow for routing between agents and llmchain
Feature request
I would like to receive a prompt and, depending on the prompt, route to a dataframe agent or an LLMChain. For example, the question may be about aggregating data, in which case I would like to use the dataframe agent. If the question is about PDF files, I would like the LLMChain to handle it.
Motivation
The ability to have a universal answer flow is ideal for my scenario. I would like to be able to handle any question and respond with an answer from either the dataframe agent or the LLMChain.
Your contribution
I am still going through the documentation and the code repo itself. I can certainly work towards a PR; I just want to make sure this is not already possible. Thank you!
hey @thecasual! would this do the trick? https://python.langchain.com/en/latest/modules/chains/generic/router.html
Hey @dev2049, I am not sure you can route to agents the same way you route to chains. Perhaps I am misunderstanding the concept.
Hey, did you find something that works like this? Do you want to work on it together?
I have not found anything yet, and sure, I would be happy to work together on this if a solution doesn't already exist.
Hi! I'm going through the exact same dead end with this. The moment I found out about the router chain I thought it would do the trick, but I haven't gotten far with it... have you solved this?
Same here, it would be super useful to route to an agent for specific data questions and an LLM for a more generic or creative conversation
This is definitely a shortcoming of the framework.
Being able to route to an agent is essential.
I am facing the same issue. I have an agent with several routes consisting of ConversationalRetrievalChain and LLMChain. The first issue was that each one takes a different input. I tried using LOTR (Lord of the Retrievers, i.e. MergerRetriever) to fix the issue, but I keep getting "Saving not supported for this chain type." over and over. I currently see no solution for it. If anyone comes up with one, please mention it.
Need something like this as well. Did someone figure out a solution yet?
I too just stumbled upon this dilemma today. I initially thought RouterChain would resolve my situation, but I then realized it would have to dynamically switch between different agents, since the chain types and configurations would have to be different between different intents.
I can share how I plan to tackle this dilemma as a workaround for the time being: I'm going to set an initial system prompt that requires a multi-class classification output based on the user's inquiry. That classification output will then route to the proper agent via Python `if` conditions. It's not ideal, I know, but it may help generate some ideas for others in limbo.
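A minimal sketch of that workaround. The `classify_intent` function here is a hypothetical stand-in for the LLM classification call (in practice it would be a system prompt that forces a single-label answer); it is stubbed with keyword matching so the routing logic itself is runnable:

def classify_intent(question: str) -> str:
    """Stand-in for an LLM call that returns exactly one intent label."""
    data_keywords = ("average", "aggregate", "table", "count")
    if any(kw in question.lower() for kw in data_keywords):
        return "DATAFRAME"
    return "GENERAL"

def route(question: str) -> str:
    """Dispatch to a handler based on the classified intent."""
    intent = classify_intent(question)
    if intent == "DATAFRAME":
        # e.g. call a pandas dataframe agent here
        return f"dataframe_agent handles: {question}"
    # e.g. call a plain LLMChain here
    return f"llm_chain handles: {question}"

print(route("What is the average sales per region?"))

The handler names are placeholders; the point is only that a cheap classification step up front lets plain Python conditionals do the agent selection.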
Has anyone found a workaround? I also need this
Same, I am also looking for a solution to this but in the js/ts framework.
There are several ways to do this; here's an example using LangChain Expression Language:
# Setting up SQL Agent
from langchain.llms import OpenAI
from langchain.utilities import SQLDatabase
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.agents.agent_types import AgentType

db = SQLDatabase.from_uri("sqlite:///Chinook.db")
llm = OpenAI(temperature=0, verbose=True)
agent_executor = create_sql_agent(
    llm=llm,
    toolkit=SQLDatabaseToolkit(db=db, llm=llm),
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
# Set up the routing logic
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.schema import StrOutputParser
from langchain.schema.runnable import RunnableMap
prompt = PromptTemplate.from_template("""If the question is about SQL, respond with `SQL`. Otherwise, respond `OTHER`.
Question: {question}""")
router_chain = prompt | ChatOpenAI() | StrOutputParser()
# Set up the base chain
llm_chain = PromptTemplate.from_template("""Respond to the question:
Question: {input}""") | ChatOpenAI() | StrOutputParser()
# Add the routing logic - use the action key to route
def select_chain(output):
    if output["action"] == "SQL":
        return agent_executor
    elif output["action"] == "OTHER":
        return llm_chain
    else:
        raise ValueError(f"Unexpected router output: {output['action']}")
# Create the final chain
# Generate the action, and pass that into `select_chain`
chain = RunnableMap({
    "action": router_chain,
    "input": lambda x: x["question"],
}) | select_chain

chain.invoke({"question": "How many SQL tables?"})
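For anyone tracing the data flow: the RunnableMap step produces a dict with `action` and `input` keys, and `select_chain` picks the downstream runnable from that dict. A plain-Python sketch of the same flow, with the router and both handlers replaced by hypothetical stubs so it runs without API keys:

def router_chain(question: str) -> str:
    # Stand-in for prompt | ChatOpenAI() | StrOutputParser()
    return "SQL" if "sql" in question.lower() else "OTHER"

def agent_executor(inp: str) -> str:
    # Stand-in for the SQL agent
    return f"SQL agent answers: {inp}"

def llm_chain(inp: str) -> str:
    # Stand-in for the base chain
    return f"LLM answers: {inp}"

def select_chain(output: dict):
    if output["action"] == "SQL":
        return agent_executor
    elif output["action"] == "OTHER":
        return llm_chain
    raise ValueError(output["action"])

def chain(inputs: dict) -> str:
    # Equivalent of RunnableMap({...}) | select_chain, then invoking the result
    output = {"action": router_chain(inputs["question"]),
              "input": inputs["question"]}
    return select_chain(output)(output["input"])

print(chain({"question": "How many SQL tables?"}))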
Thanks for this. I think I achieved something similar in JS/TS. I was putting too much responsibility onto LangChain for this.
@hwchase17 Is there any way, let's say I have 50 tables in the DB, to filter the top 5 relevant tables and then use their data for the SQL agent?
@hwchase17 Hey, hope you can help me out here: how do I add memory to the RunnableMap and share the history between the two agents?