Simon Willison
It would be useful if there were a way to pipe in content to be embedded (to both `llm embed` and `llm embed-multi`) and specify that it should be...
This is odd:

```
llm chat -T QuickJS -m qwen3:4b --td
Chatting with qwen3:4b
Type 'exit' or 'quit' to exit
Type '!multi' to enter multiple lines, then '!end' to finish...
```
We have two configuration languages right now:

- `llm -f issue:https://github.com/simonw/llm/issues/987` for fragment and template loaders
- `llm -T 'Datasette("https://...")'` for Toolboxes

I am considering making the latter form available...
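The constructor-style Toolbox spec could plausibly be parsed with Python's `ast` module. A minimal sketch (hypothetical, not the actual llm parser; `parse_toolbox_spec` is an invented name):

```python
import ast


def parse_toolbox_spec(spec: str):
    """Parse a constructor-style spec like Datasette("https://...")
    into a (class_name, args, kwargs) tuple.

    A bare name like "Datasette" is treated as a call with no arguments.
    Hypothetical sketch only; not how llm actually implements this.
    """
    tree = ast.parse(spec, mode="eval").body
    if isinstance(tree, ast.Name):  # bare "Datasette"
        return tree.id, [], {}
    if not (isinstance(tree, ast.Call) and isinstance(tree.func, ast.Name)):
        raise ValueError(f"Invalid toolbox spec: {spec!r}")
    # Only literal arguments are allowed - no arbitrary expressions
    args = [ast.literal_eval(a) for a in tree.args]
    kwargs = {k.arg: ast.literal_eval(k.value) for k in tree.keywords}
    return tree.func.id, args, kwargs


parse_toolbox_spec('Datasette("https://example.com/data")')
# → ("Datasette", ["https://example.com/data"], {})
```

Restricting arguments to literals via `ast.literal_eval` keeps the spec declarative rather than allowing arbitrary code execution on the command line.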
These days, I think llm-ollama and llm-llama-server are the best options for local models for most people, mainly because they run as a separate process, which means that the model...
The design of `llm.Toolbox` currently assumes that the list of available tools is fixed. It is quite inconvenient to enable or disable more tools as part of initializing the instance....
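One way to support per-instance tool selection is to filter the tool methods during `__init__`. A minimal sketch in plain Python, assuming hypothetical `enable`/`disable` parameters (this is not the actual `llm.Toolbox` API):

```python
class Toolbox:
    """Hypothetical sketch: a toolbox whose tool list is chosen at
    initialization time instead of being fixed on the class."""

    def __init__(self, enable=None, disable=None):
        # Collect every public method as a candidate tool
        all_tools = {
            name: getattr(self, name)
            for name in dir(self)
            if not name.startswith("_")
            and name != "tools"
            and callable(getattr(self, name))
        }
        if enable is not None:
            all_tools = {n: f for n, f in all_tools.items() if n in enable}
        if disable is not None:
            all_tools = {n: f for n, f in all_tools.items() if n not in disable}
        self._tools = all_tools

    def tools(self):
        return list(self._tools)


class FileTools(Toolbox):
    def read_file(self, path: str) -> str:
        "Read a file and return its contents."
        with open(path) as fp:
            return fp.read()

    def delete_file(self, path: str) -> str:
        "Delete a file."
        import os
        os.remove(path)
        return "deleted"


safe = FileTools(disable=["delete_file"])
# safe.tools() == ["read_file"]
```

The filtering happens once at construction, so the set of tools exposed to the model stays stable for the lifetime of the instance.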
From:

- https://github.com/simonw/llm-mistral/issues/29

The new Mistral code embedding model has this option:

> We also provide `output_dtype` and `output_dimension` parameters that allow you to control the type and dimensional size...
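To illustrate what those two parameters do, here is a sketch of post-processing an embedding vector client-side: truncating to the first N dimensions and optionally quantizing to int8. This is an illustration only, not the Mistral API's implementation, and `shape_embedding` is an invented name:

```python
def shape_embedding(vector, output_dimension=None, output_dtype="float"):
    """Hypothetical sketch of embedding post-processing.

    - output_dimension: keep only the first N dimensions
      (matryoshka-style truncation).
    - output_dtype: "float" leaves values as-is; "int8" scales each
      value from [-1.0, 1.0] into the signed 8-bit range.
    """
    if output_dimension is not None:
        vector = vector[:output_dimension]
    if output_dtype == "int8":
        # Scale to [-127, 127], clamping to the int8 range
        vector = [max(-128, min(127, round(v * 127))) for v in vector]
    return vector


shape_embedding([0.5, -1.0, 0.25, 0.9], output_dimension=2, output_dtype="int8")
# → [64, -127]
```

Smaller dtypes and dimensions trade a little retrieval accuracy for much cheaper storage, which is why APIs increasingly expose both knobs.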
I'm not sure how best to go about this, but there's definitely demand for custom display of LLM output, including this excellent-looking library for rendering streaming Markdown: https://github.com/day50-dev/Streamdown It's...
Usability improvement - you get a confusing error at the moment: https://discord.com/channels/823971286308356157/1128504153841336370/1377102845467427007

```python
import llm


class Foo:
    def foobar(self, input: str) -> str:
        """
        Description of tool goes here.
        """
        ...
```
According to https://news.ycombinator.com/item?id=44110584#44111864

> was running into this too until I started upgrading with
>
> `llm install -U llm`
>
> instead of
>
> `uv tool upgrade llm`