
How do we implement tools that "do something"

Open samuelcolvin opened this issue 1 year ago • 4 comments

Currently retrievers are tools that are expected to be benign, i.e. have no side effects, so you don't care whether the model chooses to call them or not.

Technically there's nothing to stop you from having retrievers with side-effects, and you could even look in message history to see if they were called.

But what is our recommended way of using tools that do something?

I would suggest something like this:

from typing import Literal

from pydantic import BaseModel

from pydantic_ai import Agent


class CreateTicket(BaseModel):
    title: str
    description: str

    async def run(self):
        ...


class DeleteTicket(BaseModel):
    reason: str

    async def run(self):
        ...


class ChangeSeatTicket(BaseModel):
    row: int
    seat: Literal['A', 'B', 'C', 'D']

    async def run(self):
        ...


ticket_agent = Agent(result_type=CreateTicket | DeleteTicket | ChangeSeatTicket)


async def main():
    result = await ticket_agent.run('I would like to create a ticket')
    await result.data.run()

This is a bit more logic, but it has some nice advantages:

  • you can be very clear about when/if the action is called
  • run() can take extra arguments, including agents or dependencies that let you call other agents (see the sketch after this list)
  • each Pydantic model is still registered as a tool with the LLM, so it's just as easy for the model to call
  • you can introduce arbitrary extra logic and control flow around run() without it being hidden in the "magic" of PydanticAI
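
To make the second point concrete, here is a rough sketch of run() accepting a dependency. TicketAPI is a made-up stand-in for whatever client performs the side effect, not part of PydanticAI:

from dataclasses import dataclass

from pydantic import BaseModel


@dataclass
class TicketAPI:
    # Stand-in for whatever client actually performs the side effect.
    base_url: str

    async def create(self, title: str, description: str) -> int:
        # Placeholder: in reality this would call the ticketing system.
        print(f'POST {self.base_url}/tickets {title!r}')
        return 123


class CreateTicket(BaseModel):
    title: str
    description: str

    async def run(self, api: TicketAPI) -> int:
        # The side effect is explicit: it only happens when you call run(),
        # and you decide which dependencies to hand it.
        return await api.create(self.title, self.description)

The agent only produces the validated CreateTicket; whether, when, and with which dependencies run() is invoked stays in your own code.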

samuelcolvin avatar Nov 21 '24 22:11 samuelcolvin

Also, it's no more code in PydanticAI, just a pattern to document. 😸

samuelcolvin avatar Nov 21 '24 22:11 samuelcolvin

Would the output of data.run() automatically be added to the messages, so that the LLM knows about the outcome of the tool call?

ricklamers avatar Nov 21 '24 22:11 ricklamers

The "tool call" message that is parsed to become result.data is added to messages automatically.

The response value from result.data.run() is not added by default, but you could add it very easily if you wanted to continue the conversation.

You might also want to continue using those messages in another agent, inside result.data.run().
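
For example (reusing ticket_agent from the snippet above, and assuming the current message-history API — result.all_messages() and the message_history argument to Agent.run; check the docs for the exact names in your version):

async def main():
    result = await ticket_agent.run('I would like to create a ticket')
    outcome = await result.data.run()

    # Feed the earlier exchange, plus the outcome of the side effect, back in
    # so the model knows what happened and can continue the conversation.
    followup = await ticket_agent.run(
        f'The action completed with outcome: {outcome!r}',
        message_history=result.all_messages(),
    )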

One thing I think we've realised over the last few days is that, in my mental model, our Agents can either manage an entire small workflow or be a component of a more complex workflow made up of multiple agents.

You might think of them more like an Agentlet, although I hate that name.

samuelcolvin avatar Nov 21 '24 23:11 samuelcolvin

In general this looks great for most cases.

I would also want the ability to pass tools like you do now with retrievers. It's up to me to decide whether I'm concerned about allowing the LLM to call something that has side-effects.

Maybe an option that builds on this proposal would be to have a ToolBaseModel that has an implementation of run, and I pass a subclass to the Agent constructor (or attach it with a decorator) and use the validator as the gatekeeper: if I got the result and it validates, then I'm OK for it to run and for the result to go into the next LLM call that is triggered automatically. And if I want to use it manually, I pass it as the result_type instead, like above.
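
Purely as an illustration of that idea (ToolBaseModel does not exist in PydanticAI; all the names here are made up):

from pydantic import BaseModel


class ToolBaseModel(BaseModel):
    # Subclasses implement run() to perform their side effect.
    async def run(self):
        raise NotImplementedError


class ChangeSeat(ToolBaseModel):
    row: int
    seat: str

    async def run(self):
        # Perform the seat change; the return value would either come back to
        # you (result_type mode) or be fed into the next LLM call (auto-run mode).
        return f'moved to {self.row}{self.seat}'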

intellectronica avatar Nov 22 '24 13:11 intellectronica

cc @samuelcolvin, is this still relevant?

sydney-runkle avatar Jan 24 '25 15:01 sydney-runkle

@samuelcolvin @sydney-runkle

We have several examples in the docs that show how to attach tools to agents. I'm not sure this issue is still needed.
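
For reference, attaching a tool with side effects via the decorator API looks roughly like this (a sketch assuming the current @agent.tool_plain decorator and an OpenAI model string; adjust to your setup):

from pydantic_ai import Agent

agent = Agent('openai:gpt-4o')


@agent.tool_plain
async def create_ticket(title: str, description: str) -> str:
    """Create a support ticket."""
    # The side effect runs whenever the model decides to call this tool.
    return f'created ticket {title!r}'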

izzyacademy avatar Feb 01 '25 16:02 izzyacademy