Simon Willison

2645 comments of Simon Willison

Here's a Gemini example: https://ai.google.dev/gemini-api/docs/function-calling?example=meeting#step_4_create_user_friendly_response_with_function_result_and_call_the_model_again

```javascript
// Create a function response part
const function_response_part = {
  name: tool_call.name,
  response: { result }
}

// Append function call and result of...
```

The challenge of matching tool call IDs to tool response IDs may become a lot easier if I implement this design change first: - https://github.com/simonw/llm/issues/938#issuecomment-2816986647
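One way to think about the ID-matching problem: pair each tool call with its result by ID before sending anything back to the model. A minimal sketch, assuming hypothetical `ToolCall` and `ToolResult` shapes (these field names are illustrative, not llm's actual API):

```python
from dataclasses import dataclass


# Hypothetical minimal shapes for illustration - not the real llm classes
@dataclass
class ToolCall:
    tool_call_id: str
    name: str
    arguments: dict


@dataclass
class ToolResult:
    tool_call_id: str
    output: str


def match_results(calls, results):
    """Pair each tool call with its result by ID, raising on any mismatch."""
    by_id = {result.tool_call_id: result for result in results}
    pairs = []
    for call in calls:
        if call.tool_call_id not in by_id:
            raise ValueError(f"No result for tool call {call.tool_call_id!r}")
        pairs.append((call, by_id[call.tool_call_id]))
    return pairs
```

Matching through a dict keyed on ID (rather than relying on ordering) means out-of-order or missing results fail loudly instead of silently mispairing.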

I've got most of the pre-requisites for this in the `tools/` branch now: https://github.com/simonw/llm/commits/f8cd7be60097161da1968335ba78e3e3942899a3/

Here's where I'm at:

```python
import llm

model = llm.get_model("gpt-4.1-mini")

def get_weather(city: str) -> str:
    """Get...
```
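The core trick for taking a plain Python function like `get_weather` and exposing it as a tool is introspection: derive the name, description, and parameter types from the function itself. A sketch of the kind of introspection involved - the real llm implementation may differ, and `function_to_tool_spec` is a hypothetical helper:

```python
import inspect


def function_to_tool_spec(fn):
    """Derive a simple tool description from a Python function's signature.

    A sketch only - llm's actual schema generation may use a different
    shape (e.g. JSON Schema for the parameters).
    """
    signature = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            name: param.annotation.__name__
            for name, param in signature.parameters.items()
            if param.annotation is not inspect.Parameter.empty
        },
    }


def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"Sunny in {city}"


spec = function_to_tool_spec(get_weather)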

Problems to solve:

1. Executing the functions. I realize now that I forgot to stash the actual function in `prompt.tools` - so right now we don't have a useful way...

A `Response` has an optional `.conversation` property referencing a `Conversation` or `AsyncConversation`. I believe this is `None` for prompts that started directly using `model.prompt(...)`. On that basis, I think this...

At some point we will want code that executes tools in a loop - so you pass in a prompt with some tools and then keep on executing those tool...
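That loop could look something like the following sketch. Everything here is hypothetical (the `prompt_fn` callable, `response.tool_calls()`, the dict shapes): it just shows the control flow of re-prompting with tool results until the model stops requesting calls.

```python
def run_with_tools(prompt_fn, tools, max_rounds=5):
    """Keep prompting until the model stops requesting tool calls.

    prompt_fn(tool_results) -> response is a stand-in for however the
    library issues a follow-up prompt; response.tool_calls() returns the
    calls the model wants executed. All names here are assumptions.
    """
    tool_results = []
    for _ in range(max_rounds):
        response = prompt_fn(tool_results)
        calls = response.tool_calls()
        if not calls:
            return response  # model produced a final answer
        # Execute each requested tool and collect the results for the next round
        tool_results = [
            {
                "tool_call_id": call["id"],
                "output": tools[call["name"]](**call["arguments"]),
            }
            for call in calls
        ]
    raise RuntimeError("Too many tool rounds")
```

The `max_rounds` cap matters: a model that keeps requesting tools forever would otherwise loop (and bill) indefinitely.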

I need an abstraction like `ToolCall` for a result that then gets sent to the model - I'm going to create something called `ToolResult`. Needs to handle:

Anthropic:

```json
{...
```

https://github.com/simonw/llm/blob/614941dbe5f4ef56ba4ca2ef4b9321c163ca301e/llm/models.py#L174-L178

I guess `Prompt` is going to grow a `tool_results: List[ToolResult]` property then.
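In simplified form, that addition might look like this - a hypothetical, stripped-down `Prompt` (the real class has many more attributes) showing where `tool_results` would slot in alongside `tools`:

```python
from dataclasses import dataclass, field


@dataclass
class Prompt:
    """Simplified sketch of Prompt growing a tool_results field."""

    prompt: str
    tools: list = field(default_factory=list)  # tool definitions passed in
    tool_results: list = field(default_factory=list)  # List[ToolResult] from earlier calls
```

Defaulting both lists to empty keeps the common no-tools case unchanged: `Prompt("...")` still works exactly as before.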

I decided to use the verb `chain()` for this - the thing where you end up with a chain of prompts and responses due to tool calls is the...