pydantic-ai
Manual intervention on tool calls
It seems like tool calls are automatic for all agents, i.e. Pydantic AI automatically runs the tool, passes the result back, and continues the run.
Can I manually intervene, so that I run the tools myself and decide whether to continue the run or not depending on the result?
This would be solved by #142.
@dmontagu this is an argument to keep the ctx.end_run(result) idea.
@samuelcolvin I am not sure if I understand the PR. Is it ending the run after the tool execution or before?
For example, if I use OpenAI without any framework, I have to manually inspect the tool_calls and call the tools myself. I want to be able to do that here for some agents.
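For reference, the framework-free flow being described can be sketched in plain Python. The response dict below is hand-written sample data mirroring the OpenAI tool-calling message shape, and `get_weather` is an illustrative stand-in tool, not part of any library:

```python
import json

# Hand-written sample mimicking an assistant message with tool calls,
# in the shape the OpenAI chat-completions API uses.
response_message = {
    "role": "assistant",
    "tool_calls": [
        {
            "id": "call_1",
            "type": "function",
            "function": {"name": "get_weather", "arguments": '{"city": "London"}'},
        }
    ],
}


def get_weather(city: str) -> str:
    # Stand-in tool implementation.
    return f"Rainy in {city}"


tools = {"get_weather": get_weather}

tool_messages = []
for call in response_message.get("tool_calls", []):
    name = call["function"]["name"]
    args = json.loads(call["function"]["arguments"])
    # This is the intervention point: the caller decides whether to run
    # the tool, skip it, or end the run based on the pending call.
    result = tools[name](**args)
    tool_messages.append(
        {"role": "tool", "tool_call_id": call["id"], "content": result}
    )

print(tool_messages)
```

The ask in this issue is essentially for the agent to pause at the commented intervention point instead of dispatching automatically.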
If you want to simply end the run with a specific data type, just use result_type; you can change the name of that tool with result_tool_name, e.g.:
```python
from pydantic import BaseModel

from pydantic_ai import Agent


class CityLocation(BaseModel):
    city: str
    country: str


agent = Agent('openai:gpt-4', result_type=CityLocation, result_tool_name='city_location')
result = agent.run_sync('Where were the olympics held in 2012?')
print(result.data)
#> city='London' country='United Kingdom'
print(result.cost())
```
If you want the option to end the run within an arbitrary tool, call ctx.stop_run(result) within a tool once #142 is implemented. Of course, you can call this anywhere in your function.
Hi guys! First off thanks a lot for your work on this library.
I wanted to check in on this: I think the ask here is the ability to configure the Agent so that when it decides to make a tool call, it just returns the tool call parameters (and perhaps a callable to the actual function) but does not call it itself.
This would enable a frontend chat app to prompt the user for confirmation they want to actually run the tool with the selected parameters, or reject the call and further instruct the model on how to adjust the tool call.
For example, I think something like this could be nice:
```python
agent = Agent('openai:gpt-4', result_tool_name='get_book_location', auto_run_tools=False)


@agent.tool
def get_book_location(ctx: RunContext[str], book_title: str, book_author: str | None = None) -> str:
    # consult some database
    return book_location


result = agent.run_sync('On what shelf can I find the Odyssey?')
print(result.data)
#> {'tool_name': 'get_book_location', 'tool_func': get_book_location, 'tool_call_params': {'ctx': ctx, 'book_title': 'Odyssey', 'book_author': 'Homer'}}
```
It is not exactly what you asked, but wouldn't 'agent.iter' work in this case?
```python
from pydantic_ai import Agent, CallToolsNode

agent = Agent(...)

async with agent.iter("Some prompt.") as aiter:
    async for response in aiter:
        if isinstance(response, CallToolsNode):
            if should_break(response):
                break
```
Yeah, I think so. The issue is older than the iter introduction.
Hi @Kludex, but there are no instructions on how to construct the messages manually through iter :(
I have to do something like this:
```python
async with feedback_agent.iter(
    "",
    deps=ctx.deps,
    message_history=ctx.state.message_history,
    model_settings={"temperature": 0.0, "parallel_tool_calls": False},
) as run:
    node = run.next_node
    while not isinstance(node, End_):
        if hasattr(node, 'request'):
            new_messages.append(node.request)
        elif hasattr(node, 'model_response'):
            new_messages.append(node.model_response)
            if Agent_.is_call_tools_node(node):
                part = node.model_response.parts[0]
                tool_name = part.tool_name
                args = json.loads(part.args)
                tool_call_id = part.tool_call_id
                if "final_result" in tool_name:
                    model_request = ModelRequest(parts=[
                        ToolReturnPart(
                            tool_name=tool_name,
                            content="Final result processed.",
                            tool_call_id=tool_call_id,
                            part_kind='tool-return',
                        )
                    ])
                    new_messages.append(model_request)
                    result = End(data=FeedbackOutput(**args))
                    break
        node = await run.next(node)
    if isinstance(node, End_):
        result = node.data
        new_messages = run.result.new_messages()
```
Could you look into supporting run.result.new_messages() even when the run has not reached End?
Is there an elegant way to resume this in a second iter() call? When I provide iter(message_history=...) with a message history that ends in the ToolCall, that tool call is not executed as I would expect. But doing it manually and also inserting a ToolReturnPart like @dinhngoc267 proposes seems weird to me.
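The "close the dangling tool call before resuming" pattern under discussion can be sketched independently of pydantic-ai. The part classes below are simplified stand-ins for illustration, not the library's ToolCallPart/ToolReturnPart types:

```python
from dataclasses import dataclass


@dataclass
class ToolCallPart:
    """Simplified stand-in for a tool-call message part."""
    tool_name: str
    tool_call_id: str


@dataclass
class ToolReturnPart:
    """Simplified stand-in for a tool-return message part."""
    tool_name: str
    tool_call_id: str
    content: str


def close_dangling_tool_calls(history: list) -> list:
    """Append a synthetic return for any tool call with no matching
    tool return, so the history is complete before resuming."""
    answered = {p.tool_call_id for p in history if isinstance(p, ToolReturnPart)}
    closed = list(history)
    for part in history:
        if isinstance(part, ToolCallPart) and part.tool_call_id not in answered:
            closed.append(
                ToolReturnPart(part.tool_name, part.tool_call_id, "Handled externally.")
            )
    return closed


# History ending in an unanswered tool call, as in the question above.
history = [ToolCallPart("get_book_location", "call_1")]
resumable = close_dangling_tool_calls(history)
print(type(resumable[-1]).__name__)
```

Whether the library should do this implicitly (executing open ToolCallParts on resume) is exactly what the later comments in this thread address.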
Following
My use case is the ability to decide when I need to have 'intelligence' to pick specific tools vs just running a DAG of tools without the agent picking.
The current version of pydantic_ai allows starting iter with a message history containing open ToolCallParts, which are executed first.
I believe this has been addressed. If not, please file a new issue.