
Add `prefill` option, along with implementation for OpenAI


This is an implementation of prefill in the API. It's not yet implemented in the CLI -- submitting this for review first.

We've done the OpenAI implementation. OpenAI has the slightly tricky behaviour that it sometimes includes the prefill in its own response and sometimes doesn't -- so we look for it and only add it when it's missing, on the basis that all model plugins will be expected to include the prefill in their response, which is generally what you'll want as a user.

For non-streaming this is basically trivial to implement. For streaming it means accruing chunks until you can see whether the prefill is being output or not. It's not rocket science, but it's slightly awkward to keep track of, so we've created tests for all the edge cases we can think of (and they even pass!)
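To make the streaming side concrete, here's a minimal sketch of the buffering idea -- in the spirit of this PR rather than its actual code (the function name and the plain-iterator interface are illustrative): accrue chunks until the text so far either covers the prefill or diverges from it, then decide whether to prepend the prefill before yielding anything.

```python
from typing import Iterable, Iterator


def stream_with_prefill(chunks: Iterable[str], prefill: str) -> Iterator[str]:
    """Yield text chunks so that `prefill` appears exactly once at the start.

    Illustrative sketch only -- not the implementation in this PR.
    """
    if not prefill:
        yield from chunks
        return

    it = iter(chunks)
    buffered = ""
    # Accrue chunks until we can tell whether the model is echoing the
    # prefill: stop once the buffer covers the prefill's length or diverges.
    for chunk in it:
        buffered += chunk
        if len(buffered) >= len(prefill) or not prefill.startswith(buffered):
            break

    if buffered.startswith(prefill):
        # The model echoed the prefill itself -- pass the buffer through.
        yield buffered
    else:
        # The model didn't echo it (or stopped mid-echo) -- emit the prefill
        # ourselves, then whatever the model produced that diverged from it.
        yield prefill
        if not prefill.startswith(buffered):
            yield buffered
    yield from it
```

For example, a stream starting `["Fav", "ourite colors are..."]` passes through unchanged, while one starting `["People often like blue..."]` gets the prefill prepended first.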

Example usage:

```python
response = model.prompt(
    "What are some favourite colors that people often like? "
    "Tell me all about what they like most about it.",
    prefill="Favourite colors",
    stream=True,
)
```
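When consuming that response with the usual streaming iteration pattern, the intent (as described above) is that the output always begins with the prefill, whether or not the model echoed it itself:

```python
for chunk in response:
    print(chunk, end="")
# Expected to start with "Favourite colors" either way.
```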

Partially fixes #463.

jph00 · Apr 25 '24