Support tool calling.
This adds LLM tool calling. Plugins can register new tools (Python callables that are introspected to generate a JSON schema), and models are given access to installed tools. Adds tool calling to the OpenAI Chat model.
Bundles a default `read_files` tool.
Here is a sample `brave_search` tool in a plugin:
https://github.com/rectalogic/llm-tools/blob/dev/src/llm_tools/brave_search.py
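For readers who don't want to follow the link, here is a minimal self-contained sketch of the registration pattern. The names mirror this PR's description, but the registry below is purely illustrative; the real plugin system is driven by a `register_tools` hookspec rather than a module-level dict:

```python
# Illustrative stand-in for the plugin tool registry described in this PR.
# A real plugin would implement the register_tools hookspec and call the
# register() callable it is handed; here we register directly.
TOOLS = {}

def register(fn):
    # Record the callable under its function name so the model can look
    # it up when the LLM emits a tool call.
    TOOLS[fn.__name__] = fn
    return fn

@register
def brave_search(query: str) -> str:
    "Pretend web search: a real plugin would call the Brave Search API."
    return f"Results for {query!r}"
```

The type annotations on the callable matter: they are what the `@llm.Tool` decorator introspects to build the JSON schema sent to the model.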
- add a `register_tools` hookspec to allow plugins to register tools
- add an `@llm.Tool` decorator that introspects a callable's type signature and generates a JSON schema
- expose installed tools to Models via a new `tools` Model property
- implement tool calling in the OpenAI Model
- register a default `read_files` tool
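The signature-to-schema step above can be sketched with the standard library alone. This is an assumption about how such a decorator might work, not the PR's actual implementation; the `tool_schema` helper and `PY_TO_JSON` mapping are hypothetical names:

```python
import inspect
from typing import get_type_hints

# Hypothetical mapping from Python annotations to JSON Schema types;
# a real implementation would also handle lists, optionals, etc.
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_schema(fn):
    """Derive a JSON-schema-style tool description from a callable."""
    hints = get_type_hints(fn)
    sig = inspect.signature(fn)
    properties, required = {}, []
    for name, param in sig.parameters.items():
        json_type = PY_TO_JSON.get(hints.get(name, str), "string")
        properties[name] = {"type": json_type}
        # Parameters without defaults become required fields.
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": required,
        },
    }

def read_files(path: str, max_bytes: int = 65536) -> str:
    "Read a file and return its contents."
    with open(path) as f:
        return f.read(max_bytes)

schema = tool_schema(read_files)
```

Here `schema["parameters"]` is the object the OpenAI tools API expects, with `path` required and `max_bytes` optional because it has a default.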
Note: this pins pytest_httpx until broken tests are fixed in https://github.com/simonw/llm/pull/580

Note: this requires openai>=1.40.0

Note: this drops Python 3.8, since a number of dependencies no longer support it, and adds 3.13 support
Hmm, tool call responses can themselves trigger further tool calls, so I made the OpenAI model loop until the response contains no more of them. e.g.
```
> llm chat --enable-tools
Chatting with gpt-4o-mini
Type 'exit' or 'quit' to exit
Type '!multi' to enter multiple lines, then '!end' to finish
> Search Brave for the query "hurricane milton" and write the first line of the response to /tmp/hurricane
Tool: brave_search(query=hurricane milton)
Tool: bash(command_line=echo 'Milton made landfall near Siesta Key, Florida, as a dangerous Category 3 hurricane before weakening as it cut through the state.' > /tmp/hurricane)
```

Result:

The first line of the search result for "hurricane milton" has been successfully written to `/tmp/hurricane`. The content states:

"Milton made landfall near Siesta Key, Florida, as a dangerous Category 3 hurricane before weakening as it cut through the state."
I'm working on this feature over here now: #898
There are some great ideas in here, sorry for not providing a full review!