Still can't return direct with multi-action agent
I'm still getting this error even after that discussion was marked as resolved.
Am I doing something wrong? Here is my code snippet:
```ts
const agent = RunnablePassthrough.assign({
  agent_scratchpad: (x) =>
    generatedScratchpad((x.intermediateSteps ?? []) as AgentStep[]),
})
  .pipe(prompt)
  .pipe(this.verboseRunnable)
  .pipe(currentLLM)
  .pipe(new ProceduresOutputParser());

const agentExecutor = AgentExecutor.fromAgentAndTools({
  agent,
  tools,
  returnIntermediateSteps: true,
  maxIterations: 5,
});
```
that throws the error: https://github.com/langchain-ai/langchainjs/blob/ed1f3fbcafdb494e7807f60d5ede74cf75f57b4b/langchain/src/agents/executor.ts#L433
Originally posted by @zAlweNy26 in https://github.com/langchain-ai/langchainjs/discussions/4631#discussioncomment-9563797
Hey @zAlweNy26! I'm here to help you with any bugs, questions, and becoming a contributor. While you're waiting for a human maintainer, feel free to ask me anything.
The error you're encountering is likely due to the streamRunnable configuration in your RunnableMultiActionAgent. By default, streamRunnable is set to true, which expects a single chunk of output. If multiple chunks are received, it throws an error.
To resolve this, set streamRunnable to false when initializing your RunnableMultiActionAgent:
```ts
const multiActionAgent = new RunnableMultiActionAgent({
  runnable: yourRunnableInstance,
  streamRunnable: false, // set this to false to avoid streaming issues
  // other necessary fields
});
```
Additionally, ensure that none of the tools used by the multi-action agent have the returnDirect property set to true, as this is not supported for multi-action agents [1][2].
To continue talking to Dosu, mention @dosu.
@dosu but in the discussion the maintainer said it was an oversight and that the tools with returnDirect set to true should work with multi-action agents.
The maintainer's comment about it being an oversight suggests that the restriction might be reconsidered in the future. However, as of now, the code still enforces this restriction. The AgentExecutor constructor throws an error if any tool with returnDirect is used with a multi-action agent:
```ts
if (this.agent._agentActionType() === "multi") {
  for (const tool of this.tools) {
    if (tool.returnDirect) {
      throw new Error(
        `Tool with return direct ${tool.name} not supported for multi-action agent.`
      );
    }
  }
}
```
This means that, currently, you need to ensure that none of the tools used with a multi-action agent have the returnDirect property set to true to avoid this error [1][2].
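If you want to catch this before the executor throws, here is a self-contained sketch of the same check run over your own tools array up front. Note that `ToolLike` and `findReturnDirectTools` are hypothetical names introduced here for illustration; they are not part of LangChain's API.

```typescript
// Hypothetical minimal shape of a tool; real LangChain tools carry more fields.
interface ToolLike {
  name: string;
  returnDirect?: boolean;
}

// Hypothetical helper mirroring the executor's check: returns the names of
// tools that would trigger the "not supported for multi-action agent" error.
function findReturnDirectTools(tools: ToolLike[]): string[] {
  return tools.filter((t) => t.returnDirect === true).map((t) => t.name);
}

const tools: ToolLike[] = [
  { name: "search" },
  { name: "lookup", returnDirect: true },
];

const offenders = findReturnDirectTools(tools);
if (offenders.length > 0) {
  console.log(`returnDirect tools: ${offenders.join(", ")}`); // prints "returnDirect tools: lookup"
}
```

Running a check like this when you register your tools gives you a clear error at setup time instead of a failure buried inside the `AgentExecutor` constructor.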
https://github.com/langchain-ai/langchainjs/pull/4643#issuecomment-1986677391
The reason this detection mechanism hasn't been removed from RunnableMultiActionAgent is that you can bypass the restriction by using RunnableSingleActionAgent instead. You can try the code below:
```ts
const agent = AgentRunnableSequence.fromRunnables(
  [
    RunnablePassthrough.assign({
      agent_scratchpad: (x) =>
        generatedScratchpad((x.intermediateSteps ?? []) as AgentStep[]),
    }),
    prompt,
    this.verboseRunnable,
    currentLLM,
    new ProceduresOutputParser(),
  ],
  {
    name: "YourAgentName",
    streamRunnable: false, // or pass your own streamRunnable flag
    singleAction: true,
  }
);
```
@zAlweNy26
@jeasonnow thanks for the reply, I'll try it!
@jeasonnow it seems that `AgentRunnableSequence` is not exported from the agents' `index.ts` file.
Also, based on the TODO on the class, it seems deprecated; do you know what the new alternative to it is?
Try this:

```ts
import { RunnableSequence } from "@langchain/core/runnables";

const agent = RunnableSequence.from(
  [
    RunnablePassthrough.assign({
      agent_scratchpad: (x) =>
        generatedScratchpad((x.intermediateSteps ?? []) as AgentStep[]),
    }),
    prompt,
    this.verboseRunnable,
    currentLLM,
    new ProceduresOutputParser(),
  ],
  "YourAgentName"
);

agent.singleAction = true;
```
@jeasonnow the `singleAction` property does not seem to exist on the RunnableSequence class.
> @jeasonnow it seems that `AgentRunnableSequence` is not exported from the agents' `index.ts` file. Also, based on the TODO on the class, it seems deprecated; do you know what the new alternative to it is?
The general alternative is LangGraph.
Will still try to look into this, as it seems like a thorn for lots of folks, but we generally see LangGraph as the more flexible future.
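For anyone evaluating the LangGraph route, a minimal sketch of its prebuilt ReAct agent is below. This is an illustration under assumptions, not code from this thread: the model name, the `search` tool, and the `@langchain/openai` dependency are placeholders, and you should verify against the current LangGraph docs how `returnDirect`-style behavior is handled there.

```typescript
import { tool } from "@langchain/core/tools";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

// Placeholder tool; substitute your own tools here.
const search = tool(
  async ({ query }) => `results for ${query}`,
  {
    name: "search",
    description: "Look something up",
    schema: z.object({ query: z.string() }),
  }
);

// Placeholder model; any chat model supporting tool calling should work.
const agent = createReactAgent({
  llm: new ChatOpenAI({ model: "gpt-4o-mini" }),
  tools: [search],
});

const result = await agent.invoke({
  messages: [{ role: "user", content: "your question here" }],
});
```

Because the graph runs tool calls as explicit nodes rather than inside a fixed executor loop, restrictions like the multi-action `returnDirect` check above simply don't apply in the same way.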