Simon Willison
Got this error:

> `openai.BadRequestError: Error code: 400 - {'error': {'message': "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did...`
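For context, here's a minimal sketch of the message ordering the Chat Completions API enforces; the tool name, arguments, and ID are made up for illustration:

```python
messages = [
    {"role": "user", "content": "What's the weather in Paris?"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_abc123",  # hypothetical ID
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "arguments": '{"city": "Paris"}',
                },
            }
        ],
    },
    # This message is mandatory: every tool_call_id above must get a
    # matching "tool" role reply before the next turn, or the API
    # returns the 400 quoted here.
    {"role": "tool", "tool_call_id": "call_abc123", "content": "18°C, clear"},
]
```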
OK, I now have a working tool implementation against both OpenAI and Ollama - at least for the streaming, synchronous case.
Playing with this dangerous example (exec!):

```python
import llm

model = llm.get_model("gpt-4.1-mini")

def exec_python(code: str) -> str:
    """Evaluate Python code and return anything output using print"""
    import io
    import sys
    ...
```
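The comment is truncated after the imports, but the docstring suggests the body captures stdout. A minimal sketch of that standard trick (my completion, not necessarily the original code):

```python
import io
import sys

def exec_python(code: str) -> str:
    """Evaluate Python code and return anything output using print"""
    # Temporarily redirect stdout so print() output from the
    # model-supplied code can be captured and returned as the tool result.
    stdout = io.StringIO()
    original = sys.stdout
    sys.stdout = stdout
    try:
        exec(code)
    finally:
        sys.stdout = original
    return stdout.getvalue()
```

The danger is exactly what it looks like: code generated by the model runs with the full privileges of the host process.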
This is fun:

```python
import llm

model = llm.get_model("gpt-4.1-mini")

def search_images(q: str) -> str:
    """Search for images on my blog for the given single word query."""
    import httpx
    response = ...
```
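The comment is cut off at the `httpx` call. One plausible shape for the rest, with the endpoint URL and response handling as assumptions:

```python
import httpx

def search_images(q: str) -> str:
    """Search for images on my blog for the given single word query."""
    # Hypothetical search endpoint - the real URL is elided in the
    # original comment.
    response = httpx.get(
        "https://example.com/search.json",
        params={"q": q},
    )
    response.raise_for_status()
    # Return the raw JSON text for the model to read and summarize.
    return response.text
```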
Looking at this code:

```python
conversation = model.conversation()
for s in conversation.chain(
    prompt, tools=[llm.Tool.function(search_images)]
).details():
    print(s, end="", flush=True)
```

I think I want a `model.chain()` method which actually just creates a conversation...
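A sketch of that sugar, assuming `model.chain()` simply delegates; the delegation is my guess at the intent, not the shipped implementation:

```python
class Model:
    ...

    def chain(self, prompt, **kwargs):
        # Convenience wrapper: create a throwaway conversation and
        # delegate to it, so callers don't have to manage one themselves.
        return self.conversation().chain(prompt, **kwargs)
```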
Got this simpler version working instead:

```python
import llm

model = llm.get_model("gpt-4.1-mini")

def search_images(q: str) -> str:
    """Search for images on my blog for the given single word query."""
    import ...
```
This will do for the moment.
I'm trying to figure out if this is a blocker for tools or not.
If I were to do this, here's one potential design (a sketch follows below):

- `Response` objects gain a `.reply(prompt)` method, which can be used to reply to that response with a fresh prompt...
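A hedged sketch of how `.reply()` might hang together; the attribute names and fallback behavior here are assumptions, not the actual design:

```python
class Response:
    def reply(self, prompt, **kwargs):
        # Reply in the context of the conversation this response came
        # from, creating one on the fly if the response was standalone.
        # `self.conversation` and `self.model` are assumed attributes.
        conversation = self.conversation or self.model.conversation()
        return conversation.prompt(prompt, **kwargs)
```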
I'm going to do a research spike on this in a branch.