Simon Willison
I built a prototype. Here's a fun thing where I ran it against `mlx-community/Llama-3.2-3B-Instruct-4bit` via `llm-mlx`:

```bash
llm -m mlx-community/Llama-3.2-3B-Instruct-4bit 'a poem about a badger' -u
```

```
In twilight...
```
```bash
files-to-prompt llm -c | llm -f - -m g25f -s \
  'identify all the places I would need to inform about tools - so far I just added tools:...
```
Got this working:

```pycon
>>> import llm
>>> model = llm.get_model("gpt-4.1-mini")
>>> model.prompt("hi", tools=llm.models.Tool.function(lambda s: s.upper(), 'upper'))
>>> model.prompt("hi", tools=llm.models.Tool.function(lambda s: s.upper(), 'upper')).prompt.tools
Tool(name='upper', description=None, input_schema={'properties': {'s': {'type': 'string'}}, 'required':...
```
I won't know if I've got this right until everything is working together, so for the moment I'm going to move on to:

- #936
Worth noting that I've built this for blocking sync functions so far, but I'm actually going to want this to work with async functions too.
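As a rough sketch of what async support might look like (the `execute_tool` helper and the `implementation` attribute are my assumptions here, not the library's actual API), detecting a coroutine and awaiting it could be enough to cover both cases:

```python
import asyncio
import inspect

# Hypothetical helper: run a tool's implementation whether it is sync or async.
# `tool.implementation` is an assumed attribute holding the wrapped callable.
async def execute_tool(tool, **arguments):
    result = tool.implementation(**arguments)
    # If the callable was an `async def`, calling it returned a coroutine to await
    if inspect.iscoroutine(result):
        result = await result
    return result

# Usage sketch: asyncio.run(execute_tool(weather_tool, city="London"))
```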
In https://github.com/simonw/llm/issues/937#issuecomment-2869083972 I realized that I need to stash a reference to the function itself in the tool, otherwise the later code won't be able to turn `get_weather()` into an...
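A minimal sketch of what that stashing could look like, using a simplified stand-in for the real `Tool` class (the `implementation` field name is my guess, not necessarily what ends up in the library):

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional

@dataclass
class Tool:
    name: str
    description: Optional[str]
    input_schema: dict
    # Keep a reference to the original Python function so a tool call
    # coming back from the model can actually be executed later.
    implementation: Optional[Callable[..., Any]] = None

def call_tool(tool: Tool, arguments: dict) -> Any:
    # Without the stashed implementation there is nothing to invoke
    if tool.implementation is None:
        raise ValueError(f"No implementation registered for tool {tool.name!r}")
    return tool.implementation(**arguments)
```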
Got this working:

```pycon
>>> import llm
... model = llm.get_model("gpt-4.1-mini")
...
... def get_weather(city: str) -> str:
...     """Get the weather for a given city."""
...     return f"The weather...
```
I'm reconsidering `output_schema`. I had o3 do some research - https://chatgpt.com/share/681fa714-8d2c-8006-bc1d-e34405226a7a - and it looks like OpenAI and Anthropic expect strings as return values and Gemini allows arbitrary JSON but...
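One way to handle that difference would be to normalize tool results to strings before sending them back, only passing raw JSON through for providers that accept it. A rough sketch (the function name and the provider flag are mine, not anything in the library yet):

```python
import json

def serialize_tool_result(result, provider_accepts_json: bool = False):
    # Providers like Gemini can take arbitrary JSON values directly
    if provider_accepts_json:
        return result
    # OpenAI and Anthropic expect a string, so JSON-encode anything else
    if isinstance(result, str):
        return result
    return json.dumps(result)
```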
This works nicely now, thanks to:

- #937
This may be the harder design problem (than #935 and #936). The way these are represented in different LLM APIs may differ quite a bit. Let's figure that out: Anthropic's...
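For reference, here is roughly how the same tool definition is shaped for the two APIs I've looked at so far, written out as Python dicts (simplified from memory of the docs, so treat the exact field names as approximate):

```python
# One JSON schema describing the tool's arguments
schema = {
    "type": "object",
    "properties": {"city": {"type": "string"}},
    "required": ["city"],
}

# OpenAI chat completions style: wrapped in {"type": "function", "function": {...}}
openai_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the weather for a given city.",
        "parameters": schema,
    },
}

# Anthropic Messages API style: flat, with the schema under "input_schema"
anthropic_tool = {
    "name": "get_weather",
    "description": "Get the weather for a given city.",
    "input_schema": schema,
}
```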