llm
Annotations
- #716
TODO:
- [x] Documentation for `execute()` plugin authors (which is not yet implemented)
- [ ] Handle model `.execute()` methods that yield `Chunk` in addition to `str`
- [x] Spin up a dummy model with annotations that I can start writing tests against
- [ ] Implement the `Response.chunks()` method described here for the Python API, with docs
- [ ] `llm prompt` should display annotations correctly as they are yielded by the new `.execute()` method
- [ ] Redo the work to add annotation display to `llm logs` - previous prototype is here.
- [ ] `llm-openai-plugin` to use annotations for the `web_search` tool
- [ ] `llm-anthropic` plugin to use annotations for Claude citations
I hope I got the type stuff right in https://github.com/simonw/llm/pull/847/commits/2ce2510945382af203745d28a01e6c04359b029a - which covers what happens when you do `for chunk in model.prompt(...)` and the `execute()` method may yield `str` or `Chunk` objects. Since `Chunk` has a `__str__()` method I hope things continue to work OK.
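The pattern relied on here can be sketched in a few lines. The `Chunk` class below is a simplified stand-in for illustration, not the real class from the library; the point is just that `str()` coercion makes mixed `str`/`Chunk` streams transparent to callers:

```python
# Minimal sketch: iterating a stream that may yield str or Chunk.
# Chunk here is a stand-in, not llm's actual class.

class Chunk:
    """A piece of streamed output that may carry annotations."""

    def __init__(self, text, annotations=None):
        self.text = text
        self.annotations = annotations or []

    def __str__(self):
        # Defining __str__ means str(item) works identically for
        # plain strings and Chunk objects.
        return self.text


def collect_text(stream):
    # Works whether execute() yields str, Chunk, or a mixture.
    return "".join(str(item) for item in stream)


mixed = ["Hello, ", Chunk("world", annotations=[{"type": "citation"}]), "!"]
print(collect_text(mixed))  # Hello, world!
```

Existing code that only ever calls `str()` on yielded items should keep working unchanged, while annotation-aware code can check `isinstance(item, Chunk)` to get at `.annotations`.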
Spin up a dummy model with annotations that I can start writing tests against
Idea: I could have this model class live in the llm source code but NOT be registered as a model. Then the tests could register it with a temporary plugin but I could also build an llm-debug plugin that, when installed, registers the model for real as well. This would make it easier for me to test things interactively with the llm CLI.
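The "ships unregistered, registered on demand" idea can be sketched with a simplified registry. The real mechanism is llm's pluggy-based plugin system; the registry, class, and function names below are stand-ins for illustration:

```python
# Simplified stand-in for llm's model registry, illustrating the idea:
# the example model lives in the codebase but only becomes visible when
# something (a test fixture, or the llm-debug plugin) registers it.

MODEL_REGISTRY = {}


class AnnotationsModel:
    # Stand-in for the real example model in llm/examples.py.
    model_id = "annotations-demo"


def register(model):
    MODEL_REGISTRY[model.model_id] = model


def unregister(model_id):
    MODEL_REGISTRY.pop(model_id, None)


# Nothing is registered at import time...
assert "annotations-demo" not in MODEL_REGISTRY

# ...until a test (or the llm-debug plugin) registers it:
register(AnnotationsModel())
assert "annotations-demo" in MODEL_REGISTRY

# A test fixture can unregister afterwards for a clean slate.
unregister("annotations-demo")
assert "annotations-demo" not in MODEL_REGISTRY
```

The same class can therefore back both the test suite (temporary registration, torn down after each test) and interactive CLI experiments (permanent registration via the installed plugin).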
I have that llm-debug plugin now:
```python
import llm
from llm.examples import Markov, AnnotationsModel, AnnotationsModelAsync


@llm.hookimpl
def register_models(register):
    register(Markov())
    register(AnnotationsModel(), AnnotationsModelAsync())
```
Can't package and ship it until I have at least an alpha release with `llm/examples.py` in it.