Simon Willison

787 issue results

```bash
llm templates show invalid-template-name
```
Output:
```
...
  File "/opt/homebrew/Caskroom/miniconda/base/lib/python3.10/site-packages/llm/cli.py", line 2102, in templates_show
    template = load_template(name)
  File "/opt/homebrew/Caskroom/miniconda/base/lib/python3.10/site-packages/llm/cli.py", line 3569, in load_template
    raise LoadTemplateError(f"Invalid template: {name}")
llm.cli.LoadTemplateError: Invalid...
```

bug
developer-experience
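The traceback above suggests `LoadTemplateError` escapes uncaught and dumps a full stack trace at the user. A minimal sketch of the friendlier behaviour, using hypothetical stand-ins for `load_template` and the command function (the real code lives in `llm/cli.py` and uses click):

```python
import sys


class LoadTemplateError(Exception):
    """Raised when a template name cannot be resolved."""


def load_template(name):
    # Stand-in for llm's real template loader
    raise LoadTemplateError(f"Invalid template: {name}")


def templates_show(name):
    # Catch the error and exit cleanly instead of dumping a traceback
    try:
        return load_template(name)
    except LoadTemplateError as ex:
        print(f"Error: {ex}", file=sys.stderr)
        sys.exit(1)
```

With click, the same effect falls out of re-raising as a `click.ClickException`, which click formats and exits with for you.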

I think templates are the natural place to define collections of tools that can be referenced together. Maybe fragments as well? Fragments have the advantage that you can apply more...

templates
fragments
tools
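One hypothetical shape this could take, sketched in the YAML that templates already use (the `tools:` key is invented here for illustration, not real syntax):

```yaml
# templates/research.yaml - hypothetical syntax for bundling tools
system: You are a research assistant.
tools:
  - web_search
  - fetch_page
```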

Really weird bug. I wrote about this here: https://simonwillison.net/2025/May/18/qwen25vl-in-ollama/
```bash
llm install llm-ollama
ollama pull qwen2.5vl
llm -a https://static.simonwillison.net/static/2025/poster.jpg \
  'convert to markdown' -m qwen2.5vl
```
I got back:
>...

bug
help wanted
attachments

> For price prediction, I'd really like LLM to grow a token counting feature. For both Claude and Gemini you can send a prompt (with attachments and system prompts and...

enhancement
design
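Claude and Gemini both expose server-side token-counting endpoints, so the feature could delegate to the provider where possible. A rough sketch of the shape such a helper might take, with a crude chars/4 fallback (the fallback heuristic is an assumption for illustration, not llm's design):

```python
def estimate_tokens(text, counter=None):
    """Return a token count for text.

    counter: optional callable wrapping a provider's real
    count-tokens endpoint. Falls back to a crude ~4 chars/token
    heuristic when no counter is available.
    """
    if counter is not None:
        return counter(text)
    return max(1, len(text) // 4)
```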

I found myself wanting to look at logs from just the last 2 weeks, so I tried this out in a branch. Not yet decided if I should land this.

enhancement
research
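llm keeps its logs in SQLite, so the underlying filter is a datetime cutoff in a query. A minimal sketch (the table and column names here are assumptions, not the real schema):

```python
import sqlite3
from datetime import datetime, timedelta, timezone


def responses_since(conn, days=14):
    """Return rows logged within the last `days` days.

    Assumes a `responses` table with an ISO-8601 `datetime_utc`
    column; ISO strings in the same format compare correctly
    as text.
    """
    cutoff = (datetime.now(timezone.utc) - timedelta(days=days)).isoformat()
    return conn.execute(
        "select id, datetime_utc from responses where datetime_utc >= ?",
        (cutoff,),
    ).fetchall()
```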

Here: https://github.com/simonw/llm/blob/7f49cc254b1fdc2eb1555a03627d31bf38337926/llm/cli.py#L905-L910

That's from before I refactored how keys work in 6c6b100f3ee16983e9a3d9ec09aecb2b91210ed7

Do we still need that branch?

refactor
research

See https://simonwillison.net/2025/May/7/gemini-images-preview/ - it's now possible to get a response back from `gemini-2.0-flash-preview-image-generation` which includes inline images like this:
```json
{
  "candidates": [
    {
      "content": {
        "parts": [
          {
            "text":...
```

enhancement
design
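A sketch of pulling the inline images out of a response shaped like that JSON. The `inlineData` / `mimeType` / `data` field names follow the Gemini REST response format, but treat the exact shape as an assumption here:

```python
import base64


def extract_inline_images(response):
    """Yield (mime_type, raw_bytes) for each inline image part
    in a Gemini-style generateContent response dict."""
    for candidate in response.get("candidates", []):
        for part in candidate.get("content", {}).get("parts", []):
            inline = part.get("inlineData")
            if inline and inline.get("mimeType", "").startswith("image/"):
                yield inline["mimeType"], base64.b64decode(inline["data"])
```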

Related but not the same as:
- #1067

We still have to provide our own implementation of the tool, but the syntax for including it in the prompt may look...

plugins
design
tools
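Whatever the prompt-side syntax ends up looking like, the plugin side presumably still needs a name-to-implementation registry so a tool referenced in a prompt can be resolved. A minimal sketch (all names here are hypothetical):

```python
TOOLS = {}


def register_tool(fn):
    """Decorator: make fn available to prompts under its name."""
    TOOLS[fn.__name__] = fn
    return fn


def resolve_tools(names):
    """Map tool names referenced in a prompt to implementations."""
    try:
        return [TOOLS[name] for name in names]
    except KeyError as ex:
        raise KeyError(f"Unknown tool: {ex.args[0]}") from None


@register_tool
def multiply(a, b):
    return a * b
```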

> In digging through the code I have a nasty suspicion that attachments are _not_ fully `asyncio` safe - I think sometimes when an async model is executing a prompt...

refactor
design
attachments
asyncio

Refs:
- #1014

TODO:
- [x] Add `ToolOutput` mechanism for returning attachments
- [x] Get that working in the `.chain()` mechanism
- [x] Get it working in default OpenAI plugin...

attachments
tools
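A sketch of the shape that TODO list implies for `ToolOutput` - text output plus attachments the tool wants passed back to the model (the real design is whatever this PR landed; this is just an illustration):

```python
from dataclasses import dataclass, field


@dataclass
class ToolOutput:
    """Result of a tool call: text output plus any attachments
    (e.g. images) to return alongside it."""
    output: str
    attachments: list = field(default_factory=list)
```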